Prosecution Insights
Last updated: April 19, 2026
Application No. 19/055,720

METHOD AND APPARATUS WHICH PROVIDE USER INTERFACE FOR ELECTROCARDIOGRAM ANALYSIS

Non-Final OA: §101, §102, §103, §112
Filed: Feb 18, 2025
Examiner: SANGHERA, STEVEN G.S.
Art Unit: 3684
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Medical AI Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 30% (At Risk)
Expected OA Rounds: 1-2
Estimated Time to Grant: 4y 6m
Grant Probability with Interview: 60%

Examiner Intelligence

Career Allow Rate: 30% (49 granted / 165 resolved; -22.3% vs TC avg)
Interview Lift: +30.4% (allow rate in resolved cases with an interview vs. without)
Typical Timeline: 4y 6m avg prosecution
Currently Pending: 60 applications
Career History: 225 total applications across all art units
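The headline figures above can be reproduced from the raw counts. A minimal sketch (the 30% baseline and 60% with-interview probabilities are read off the dashboard; the underlying with/without-interview sub-counts are not shown, so the extra 0.4 points in the reported +30.4% lift presumably come from unrounded rates):

```python
# Reproduce the examiner's headline statistics from the counts shown above.
granted = 49
resolved = 165

career_allow_rate = granted / resolved              # 49 / 165
print(f"Career allow rate: {career_allow_rate:.1%}")  # 29.7%, displayed as 30%

# Gap between the dashboard's baseline and with-interview grant probabilities.
baseline, with_interview = 0.30, 0.60
print(f"Interview lift: {with_interview - baseline:+.0%}")  # +30%
```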

Statute-Specific Performance

§101: 34.2% (-5.8% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 5.9% (-34.1% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)

Tech Center average is an estimate. Based on career data from 165 resolved cases.
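The per-statute rates and deltas are internally consistent: backing the Tech Center average out of each pair yields the same ≈40.0% baseline for every statute, consistent with a single TC-level estimate. A quick cross-check:

```python
# Back out the implied Tech Center average from each statute's rate and delta.
stats = {
    "§101": (34.2, -5.8),
    "§103": (40.4, +0.4),
    "§102": (5.9, -34.1),
    "§112": (17.7, -22.3),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # examiner rate minus "vs TC avg" delta
    print(f"{statute}: implied TC avg = {tc_avg:.1f}%")
# Every statute implies the same 40.0% Tech Center average.
```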

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 02/18/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “input/output unit” in claim 18. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The input/output unit is described as software, hardware, etc. on page 11.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112(b)

Claim 13 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 13 recites the limitation “the third sub-control graphic” in lines 7 and 18 of the claim. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1-16 are drawn to a method, claim 17 is drawn to a method, and claim 18 is drawn to a computing device, each of which is within the four statutory categories. Claims 1-18 are further directed to an abstract idea on the grounds set out in detail below. As discussed below, the claims do not include additional elements that are sufficient to amount to significantly more than the abstract idea because the additional computer elements, which are recited at a high level of generality, provide conventional computer functions that do not add meaningful limits to practicing the abstract idea (Step 1: YES).
Step 2A, Prong One:

Claim 1 recites a method of providing a) a user interface (if tied to an electronic device) for electrocardiogram analysis, the method comprising: 1) obtaining bio-data of an electrocardiogram reading target; and b) displaying a first control graphic configured to 2) switch a visual representation and state according to a result of electrocardiogram analysis performed based on the obtained bio-data in a first area of a) the user interface; b) wherein the first control graphic is constructed for each type of disease or electrocardiogram feature that can be read from an electrocardiogram.

Claim 1 recites, in part, performing the steps of 1) obtaining bio-data of an electrocardiogram reading target and 2) switching a visual representation and state according to a result of electrocardiogram analysis performed based on the obtained bio-data in a first area of the first user interface (when considered with pen and paper). These steps correspond to Certain Methods of Organizing Human Activity, more particularly, managing personal behavior or relationships or interactions between people (including following rules or instructions). For example, a person can determine how to display data on a sheet of paper after obtaining it.

Claim 17 recites a method of providing a user interface for electrocardiogram analysis, the method comprising: 3) providing c) a first user interface (if tied to an electronic device) that implements d) a visualized computing environment in which a first user can perform electrocardiogram analysis in order to generate basic reading information based on bio-data of an electrocardiogram reading target; and 4) providing e) a second user interface (if tied to an electronic device) that implements d) a visualized computing environment in which a second user can perform electrocardiogram analysis in order to generate final reading information based on the bio-data and the basic reading information generated via c) the first user interface; 2) wherein c) the first user interface and e) the second user interface each include a first area adapted to display b) a first control graphic that switches a visual representation and state according to a result of electrocardiogram analysis; and b) wherein the first control graphic is constructed for each type of disease or electrocardiogram feature that can be read from an electrocardiogram.

Claim 17 recites, in part, performing the steps of 3) providing a first user interface (when considered with pen and paper) that implements an environment in which a first user can perform electrocardiogram analysis in order to generate basic reading information based on bio-data of an electrocardiogram reading target, 4) providing a second user interface (when considered with pen and paper) that implements an environment in which a second user can perform electrocardiogram analysis in order to generate final reading information based on the bio-data and the basic reading information generated via the first user interface, and 2) wherein the first user interface and the second user interface each include a first area that switches a visual representation and state according to a result of electrocardiogram analysis. These steps correspond to Certain Methods of Organizing Human Activity, more particularly, managing personal behavior or relationships or interactions between people (including following rules or instructions). For example, a person can determine how to display data on a sheet of paper after obtaining it.

Claim 18 recites f) a computing device for providing a user interface for electrocardiogram analysis, the computing device comprising: f1) a processor including at least one core; f2) memory including program codes executable by f1) the processor; and f3) an input/output unit configured to 5) provide a) a user interface (if tied to an electronic device); 2) wherein a) the user interface includes a first area adapted to display b) a first control graphic that switches a visual representation and state according to a result of electrocardiogram analysis performed based on bio-data of an electrocardiogram reading target; and wherein the first control graphic is constructed for each type of disease or electrocardiogram feature that can be read from an electrocardiogram.

Claim 18 recites, in part, performing the steps of 5) providing a user interface (when considered with pen and paper) and 2) wherein the user interface includes a first area that switches a visual representation and state according to a result of electrocardiogram analysis performed based on bio-data of an electrocardiogram reading target. These steps correspond to Certain Methods of Organizing Human Activity, more particularly, managing personal behavior or relationships or interactions between people (including following rules or instructions). For example, a person can determine how to display data on a sheet of paper after obtaining it.

Dependent claims 2-16 include all of the limitations of claim 1, and therefore likewise incorporate the above-described abstract idea. Dependent claims 2-3, 12, and 14 add additional display icon steps.
Claim 6 adds the additional step of “wherein, when a user input for a selection method is received, the first sub-control graphic is switched to an ON state or an OFF state,” and claim 7 adds the additional steps of “derives a feature associated with at least one of a rhythm and shape of an electrocardiogram signal included in the obtained bio-data” and “determines whether a condition preset to determine an onset of a specific disease is satisfied by analyzing the derived feature.” Additionally, the limitations of dependent claims 4-5, 8-11, 13, and 15-16 further specify elements of the claims from which they depend without adding any additional steps. These additional limitations only further serve to limit the abstract idea. Thus, dependent claims 2-16 are nonetheless directed to fundamentally the same abstract idea as independent claim 1 (Step 2A (Prong One): YES).

Prong Two:

This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of using a) a user interface, b) displaying a first control graphic (with sub-control graphics from the dependent claims), wherein the first control graphic is constructed for each type of disease or electrocardiogram feature that can be read from an electrocardiogram, c) a first user interface, d) a visualized computing environment, e) a second user interface, and f) a computing device for providing a user interface for electrocardiogram analysis, the computing device comprising: f1) a processor including at least one core, f2) memory including program codes executable by the processor, and f3) an input/output unit to perform the claimed steps.

The a) user interface, b) displaying of a first control graphic, c) first user interface, d) visualized computing environment, e) second user interface, and f) computing device for providing a user interface for electrocardiogram analysis, the computing device comprising: f1) a processor including at least one core, f2) memory including program codes executable by the processor, and f3) an input/output unit in these steps are recited at a high level of generality (i.e., as generic components performing generic computer functions) such that they amount to no more than mere instructions to apply the exception using generic computer components (see: Applicant’s specification, pages 14-15, where there are generic computing components for these elements; see MPEP 2106.05(f)). Dependent claims recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims.

Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit to integrate the abstract idea into a practical application. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea (Step 2A (Prong Two): NO).

Step 2B:

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a) a user interface, b) displaying a first control graphic (with sub-control graphics from the dependent claims), wherein the first control graphic is constructed for each type of disease or electrocardiogram feature that can be read from an electrocardiogram, c) a first user interface, d) a visualized computing environment, e) a second user interface, and f) a computing device for providing a user interface for electrocardiogram analysis, the computing device comprising: f1) a processor including at least one core, f2) memory including program codes executable by the processor, and f3) an input/output unit to perform the claimed steps amount to no more than mere instructions to apply the exception using generic computer components that do not offer “significantly more” than the abstract idea itself, because the claims do not recite an improvement to another technology or technical field, an improvement to the functioning of any computer itself, or meaningful limitations beyond generally linking an abstract idea to a particular technological environment.

It should be noted that the claims do not include additional elements that amount to significantly more than the judicial exception because the Specification recites mere generic computer components, as discussed above, that are being used to apply certain mental steps, certain method steps of organizing human activity, or certain mathematical steps. Specifically, MPEP 2106.05(f) recites that the following limitations are not significantly more: adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f)).

The current invention generates a display utilizing a) a user interface, b) displaying a first control graphic (with sub-control graphics), wherein the first control graphic is constructed for each type of disease or electrocardiogram feature that can be read from an electrocardiogram, c) a first user interface, d) a visualized computing environment, e) a second user interface, and f) a computing device for providing a user interface for electrocardiogram analysis, the computing device comprising: f1) a processor including at least one core, f2) memory including program codes executable by the processor, and f3) an input/output unit; thus these computing components are adding the words “apply it” with mere instructions to implement the abstract idea on a computer. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claims are not patent eligible (Step 2B: NO).

Claims 1-18 are therefore rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 16-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. 2022/0384044 to Ulloa-Cerna et al.

As per claim 1, Ulloa-Cerna et al. teaches a method of providing a user interface for electrocardiogram analysis, the method comprising:

--obtaining bio-data of an electrocardiogram reading target; (see: paragraph [0006], where there is reception of electrocardiogram trace data associated with a patient) and

--displaying a first control graphic configured to switch a visual representation and state according to a result of electrocardiogram analysis performed based on the obtained bio-data in a first area of the user interface; (see: FIG. 2B and paragraph [0051], where there is displaying of a block diagram for a composite model that shows the classification pipeline for ECG trace and other EHR data. Also see: paragraph [0074], where there is receiving of risk score data indicative of a likelihood the patient will suffer from one of the diseases in the set of cardiology diseases within a predetermined period of time when the trace data was generated. The display is being switched to a result based on the obtained data)

--wherein the first control graphic is constructed for each type of disease or electrocardiogram feature that can be read from an electrocardiogram (see: paragraphs [0074] and [0080], where there is displaying of risk scores for various diseases. The first control graphic here contains the risk scores for each disease).

As per claim 16, Ulloa-Cerna et al. teaches the method of claim 1, see discussion of claim 1. Ulloa-Cerna et al. further teaches wherein, when a number of leads of an electrocardiogram signal included in the bio-data is determined, the first control graphic is reconstructed for each type of disease or electrocardiogram feature that can be read based on the number of leads of the electrocardiogram signal (see: paragraph [0093], where there is a determination of the number of leads, and paragraph [0113], where the system displays information based on the analyzed trace data. The system here is configured to reconstruct the graphic based on the analyzed trace data).

As per claim 17, Ulloa-Cerna et al. teaches a method of providing a user interface for electrocardiogram analysis, the method comprising:

--providing a first user interface that implements a visualized computing environment in which a first user can perform electrocardiogram analysis in order to generate basic reading information based on bio-data of an electrocardiogram reading target; (see: paragraph [0006], where there is reception of electrocardiogram trace data associated with a patient. There is an interface here used to collect this data) and

--providing a second user interface that implements a visualized computing environment in which a second user can perform electrocardiogram analysis in order to generate final reading information based on the bio-data and the basic reading information generated via the first user interface; (see: paragraph [0113], where there is an ECG analysis application of a second computing device. The device here has an interface and can perform an analysis on the electrocardiogram trace data)

--wherein the first user interface and the second user interface each include a first area adapted to display a first control graphic that switches a visual representation and state according to a result of electrocardiogram analysis; (see: FIG. 2B and paragraph [0051], where there is displaying of a block diagram for a composite model that shows the classification pipeline for ECG trace and other EHR data. Also see: paragraph [0074], where there is receiving of risk score data indicative of a likelihood the patient will suffer from one of the diseases in the set of cardiology diseases within a predetermined period of time when the trace data was generated. The display is being switched to a result based on the obtained data) and

--wherein the first control graphic is constructed for each type of disease or electrocardiogram feature that can be read from an electrocardiogram (see: paragraphs [0074] and [0080], where there is displaying of risk scores for various diseases. The first control graphic here contains the risk scores for each disease).

As per claim 18, Ulloa-Cerna et al. teaches a computing device for providing a user interface for electrocardiogram analysis, the computing device comprising:

--a processor including at least one core; (see: paragraph [0117], where there is a processor with at least one core)

--memory including program codes executable by the processor; (see: paragraph [0117], where there is a memory and program code which is being executed) and

--an input/output unit configured to provide a user interface; (see: paragraph [0117], where there is an input/output unit of a display which provides a user interface)

--wherein the user interface includes a first area adapted to display a first control graphic that switches a visual representation and state according to a result of electrocardiogram analysis performed based on bio-data of an electrocardiogram reading target; (see: FIG. 2B and paragraph [0051], where there is displaying of a block diagram for a composite model that shows the classification pipeline for ECG trace and other EHR data. Also see: paragraph [0074], where there is receiving of risk score data indicative of a likelihood the patient will suffer from one of the diseases in the set of cardiology diseases within a predetermined period of time when the trace data was generated. The display is being switched to a result based on the obtained data) and

--wherein the first control graphic is constructed for each type of disease or electrocardiogram feature that can be read from an electrocardiogram (see: paragraphs [0074] and [0080], where there is displaying of risk scores for various diseases. The first control graphic here contains the risk scores for each disease).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-3 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. 2022/0384044 to Ulloa-Cerna et al. in view of U.S. 2019/00295700 to Weinstock et al.

As per claim 2, Ulloa-Cerna et al. teaches the method of claim 1, see discussion of claim 1. Ulloa-Cerna et al. further teaches wherein:

--the first control graphic is a button-type graphic; (see: paragraphs [0118] and [0122], where there are actionable buttons).

Ulloa-Cerna et al. may not further, specifically teach:

1) --a button-type graphic configured to dynamically switch a visual representation and state in response to a command to input the result of the electrocardiogram analysis; and

2) --the command is generated automatically when an operation of performing the electrocardiogram analysis is completed, or is generated through a user operation for inputting the result of the electrocardiogram analysis.

Weinstock et al. teaches:

1) --a button-type graphic configured to dynamically switch a visual representation and state in response to a command to input the result of the electrocardiogram analysis; (see: paragraphs [0141] and [0148], where the screen switches to an uploading option for uploading test results) and

2) --the command is generated automatically when an operation of performing the electrocardiogram analysis is completed, or is generated through a user operation for inputting the result of the electrocardiogram analysis (see: paragraphs [0141] and [0148], where the screen switches to an uploading option for uploading test results. The command here is automatically generated when the user wants to upload their results on the device, as the device screen here displays other information when not uploading data).

One of ordinary skill before the effective filing date of the claimed invention would have found it obvious to have 1) a button-type graphic configured to dynamically switch a visual representation and state in response to a command to input the result of the electrocardiogram analysis and to have 2) the command generated automatically when an operation of performing the electrocardiogram analysis is completed, or generated through a user operation for inputting the result of the electrocardiogram analysis, as taught by Weinstock et al., in the method as taught by Ulloa-Cerna et al., with the motivation(s) of improving access to data (see: paragraph [0020] of Weinstock et al.).

As per claim 3, Ulloa-Cerna et al. teaches the method of claim 1, see discussion of claim 1. Ulloa-Cerna et al. may not further, specifically teach wherein the first control graphic comprises at least one of:

--a first sub-control graphic configured to dynamically switch a visual representation and state according to an analysis result obtained by inputting electrocardiogram data, included in the obtained bio-data, to a pre-trained artificial intelligence model;

--a second sub-control graphic configured to dynamically switch a visual representation and state according to an analysis result obtained according to a logic predetermined for each type of disease or electrocardiogram feature that can be read from the electrocardiogram; and

--a third sub-control graphic configured to dynamically switch a visual representation and state based on a user operation for inputting an analysis result obtained by a user who analyzes the electrocardiogram.

Weinstock et al. teaches:

--a first sub-control graphic configured to dynamically switch a visual representation and state according to an analysis result obtained by inputting electrocardiogram data, included in the obtained bio-data, to a pre-trained artificial intelligence model; (see: below)

--a second sub-control graphic configured to dynamically switch a visual representation and state according to an analysis result obtained according to a logic predetermined for each type of disease or electrocardiogram feature that can be read from the electrocardiogram; (see: below) and

--a third sub-control graphic configured to dynamically switch a visual representation and state based on a user operation for inputting an analysis result obtained by a user who analyzes the electrocardiogram (see: paragraphs [0141] and [0148], where the screen switches to an uploading option for uploading test results. The command here is automatically generated when the user wants to upload their results on the device, as the device screen here displays other information when not uploading data).
One of ordinary skill before the effective filing date of the claimed invention would have found it obvious to have a first sub-control graphic configured to dynamically switch a visual representation and state according to an analysis result obtained by inputting electrocardiogram data, included in the obtained bio-data, to a pre-trained artificial intelligence model, a second sub-control graphic configured to dynamically switch a visual representation and state according to an analysis result obtained according to a logic predetermined for each type of disease or electrocardiogram feature that can be read from the electrocardiogram, and a third sub-control graphic configured to dynamically switch a visual representation and state based on a user operation for inputting an analysis result obtained by a user who analyzes the electrocardiogram as taught by Weinstock et al. in the method as taught by Ulloa-Cerna et al. with the motivation(s) of improving access to data (see: paragraph [0020] of Weinstock et al.). As per claim 7, Ulloa-Cerna et al. and Weinstock et al. in combination teaches the method of claim 3, see discussion of claim 3. Ulloa-Cerna et al. further teaches wherein the predetermined logic: --derives a feature associated with at least one of a rhythm and shape of an electrocardiogram signal included in the obtained bio-data; (see: paragraph [0105] where there is a determination of a rhythm obtained in the data) and --determines whether a condition preset to determine an onset of a specific disease is satisfied by analyzing the derived feature (see: paragraph [0105] where the determination of rhythm also includes a determination of whether the rhythms are similar). Claims 4, 6, 8, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. 2022/0384044 to Ulloa-Cerna et al. in view of U.S. 2019/00295700 to Weinstock et al. as applied to claim 3, and further in view of U.S. 2022/0133282 to Sexton. As per claim 4, Ulloa-Cerna et al. 
and Weinstock et al. in combination teaches the method of claim 3, see discussion of claim 3. The combination may not further, specifically teach wherein: --when the analysis result in which a disease that can be read from an electrocardiogram has occurred is obtained via the artificial intelligence model, the first sub-control graphic is switched from an OFF state to an ON state; and --when the analysis result in which a disease that can be read from an electrocardiogram has not occurred is obtained via the artificial intelligence model, the first sub-control graphic is maintained in an OFF state. Sexton teaches: --when the analysis result in which a disease that can be read from an electrocardiogram has occurred is obtained via the artificial intelligence model, the first sub-control graphic is switched from an OFF state to an ON state; (see: paragraph [0075] where, when a positive test result is obtained, a graphic for a notification is switched to an ON state) and --when the analysis result in which a disease that can be read from an electrocardiogram has not occurred is obtained via the artificial intelligence model, the first sub-control graphic is maintained in an OFF state (see: paragraph [0075] where, when a negative test result is obtained, the graphic for the notification stays in an OFF state). One of ordinary skill before the effective filing date of the claimed invention would have found it obvious to have the first sub-control graphic switched from an OFF state to an ON state when the analysis result in which a disease that can be read from an electrocardiogram has occurred is obtained via the artificial intelligence model, and maintained in an OFF state when the analysis result in which a disease that can be read from an electrocardiogram has not occurred is obtained via the artificial intelligence model, as taught by Sexton in the method as taught by Ulloa-Cerna et al. 
and Weinstock et al. in combination with the motivation(s) of alerting individuals of risk (see: paragraph [0028] of Sexton). As per claim 6, Ulloa-Cerna et al. and Weinstock et al. in combination teaches the method of claim 3, see discussion of claim 3. The combination may not further, specifically teach wherein, when a user input for a selection method is received, the first sub-control graphic is switched to an ON state or an OFF state. Sexton teaches: --wherein, when a user input for a selection method is received, the first sub-control graphic is switched to an ON state or an OFF state (see: paragraph [0075] where a result that is either a positive or a negative test result is received, and a graphic for a notification is switched to an ON state or maintained in an OFF state accordingly). One of ordinary skill before the effective filing date of the claimed invention would have found it obvious to have wherein, when a user input for a selection method is received, the first sub-control graphic is switched to an ON state or an OFF state as taught by Sexton in the method as taught by Ulloa-Cerna et al. and Weinstock et al. in combination with the motivation(s) of alerting individuals of risk (see: paragraph [0028] of Sexton). As per claim 8, Ulloa-Cerna et al. and Weinstock et al. in combination teaches the method of claim 7, see discussion of claim 7. The combination may not further, specifically teach wherein: --when it is determined that the feature satisfies the condition, the second sub-control graphic is switched from an OFF state to an ON state; and --when it is determined that the feature does not satisfy the condition, the second sub-control graphic is maintained in an OFF state. 
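Claims 7 and 8 tie the second sub-control graphic to a "predetermined logic": derive a rhythm or shape feature from the ECG signal, test it against a preset disease-onset condition, and toggle the graphic accordingly. The following sketch uses an assumed feature (mean heart rate from R-R intervals) and conventional bradycardia/tachycardia cutoffs; neither the feature choice nor the thresholds come from the application or the cited references:

```python
# Hypothetical thresholds; 60/100 bpm are conventional textbook cutoffs.
BRADYCARDIA_BPM = 60
TACHYCARDIA_BPM = 100

def derive_rhythm_feature(rr_intervals_ms: list[float]) -> float:
    """Derive a rhythm feature (mean heart rate in bpm) from R-R intervals."""
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    return 60_000.0 / mean_rr

def condition_satisfied(bpm: float) -> bool:
    """Preset condition: rate outside the normal band suggests an onset."""
    return bpm < BRADYCARDIA_BPM or bpm > TACHYCARDIA_BPM

def second_sub_control_state(rr_intervals_ms: list[float]) -> str:
    # Claim 8 logic: switch ON if the derived feature satisfies the
    # preset condition; otherwise the graphic stays OFF.
    bpm = derive_rhythm_feature(rr_intervals_ms)
    return "ON" if condition_satisfied(bpm) else "OFF"

print(second_sub_control_state([500.0] * 10))  # 120 bpm -> ON
print(second_sub_control_state([800.0] * 10))  # 75 bpm -> OFF
```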
Sexton teaches: --when it is determined that the feature satisfies the condition, the second sub-control graphic is switched from an OFF state to an ON state; (see: paragraph [0075] where, when a positive test result is obtained, a graphic for a notification is switched to an ON state) and --when it is determined that the feature does not satisfy the condition, the second sub-control graphic is maintained in an OFF state (see: paragraph [0075] where, when a negative test result is obtained, the graphic for the notification stays in an OFF state). One of ordinary skill before the effective filing date of the claimed invention would have found it obvious to have, when it is determined that the feature satisfies the condition, the second sub-control graphic switched from an OFF state to an ON state and, when it is determined that the feature does not satisfy the condition, the second sub-control graphic maintained in an OFF state, as taught by Sexton in the method as taught by Ulloa-Cerna et al. and Weinstock et al. in combination with the motivation(s) of alerting individuals of risk (see: paragraph [0028] of Sexton). As per claim 10, Ulloa-Cerna et al. and Weinstock et al. in combination teaches the method of claim 3, see discussion of claim 3. The combination may not further, specifically teach wherein, when a user input for a selection method is received, the second sub-control graphic is switched to an ON state or an OFF state. Sexton teaches: --wherein, when a user input for a selection method is received, the second sub-control graphic is switched to an ON state or an OFF state (see: paragraph [0075] where, when a positive test result is obtained, a graphic for a notification is switched to an ON state). 
One of ordinary skill before the effective filing date of the claimed invention would have found it obvious to have wherein, when a user input for a selection method is received, the second sub-control graphic is switched to an ON state or an OFF state as taught by Sexton in the method as taught by Ulloa-Cerna et al. and Weinstock et al. in combination with the motivation(s) of alerting individuals of risk (see: paragraph [0028] of Sexton). Claims 5 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. 2022/0384044 to Ulloa-Cerna et al. in view of U.S. 2019/00295700 to Weinstock et al. further in view of U.S. 2022/0133282 to Sexton as applied to claim 4, and further in view of U.S. 2021/0098089 to Choi. As per claim 5, Ulloa-Cerna et al., Weinstock et al., and Sexton in combination teaches the method of claim 4, see discussion of claim 4. The combination may not further, specifically teach wherein, when being switched from an OFF state to an ON state, the first sub-control graphic dynamically switches a color according to severity that can be read based on the electrocardiogram analysis result obtained by the artificial intelligence model. Choi teaches: --wherein, when being switched from an OFF state to an ON state, the first sub-control graphic dynamically switches a color according to severity that can be read based on the electrocardiogram analysis result obtained by the artificial intelligence model (see: paragraph [0095] where there is color alerting of severity of the disease). 
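Claims 5 and 9 add severity-dependent coloring when a sub-control graphic switches from OFF to ON. A hypothetical mapping follows; the score bands and color choices are our own illustrative assumptions, not taken from Choi or the application:

```python
def severity_color(severity: float) -> str:
    """Map a severity score in [0, 1] to a display color.

    The bands below are hypothetical; the claims only require that
    the color track severity when the graphic switches OFF -> ON.
    """
    if severity >= 0.8:
        return "red"      # high severity
    if severity >= 0.5:
        return "orange"   # moderate severity
    if severity > 0.0:
        return "yellow"   # low severity
    return "gray"         # OFF state: nothing to flag

print(severity_color(0.9))  # red
print(severity_color(0.0))  # gray
```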
One of ordinary skill before the effective filing date of the claimed invention would have found it obvious to have wherein, when being switched from an OFF state to an ON state, the first sub-control graphic dynamically switches a color according to severity that can be read based on the electrocardiogram analysis result obtained by the artificial intelligence model as taught by Choi in the method as taught by Ulloa-Cerna et al., Weinstock et al., and Sexton in combination with the motivation(s) of notifying of disease status (see: paragraph [0087] of Choi). As per claim 9, Ulloa-Cerna et al., Weinstock et al., and Sexton in combination teaches the method of claim 8, see discussion of claim 8. The combination may not further, specifically teach wherein, when being switched from an OFF state to an ON state, the second sub-control graphic dynamically switches a color according to severity that can be read based on the analysis result obtained according to the predetermined logic. Choi teaches: --when being switched from an OFF state to an ON state, the second sub-control graphic dynamically switches a color according to severity that can be read based on the analysis result obtained according to the predetermined logic (see: paragraph [0095] where there is color alerting of severity of the disease). One of ordinary skill before the effective filing date of the claimed invention would have found it obvious to have wherein, when being switched from an OFF state to an ON state, the second sub-control graphic dynamically switches a color according to severity that can be read based on the analysis result obtained according to the predetermined logic as taught by Choi in the method as taught by Ulloa-Cerna et al., Weinstock et al., and Sexton in combination with the motivation(s) of notifying of disease status (see: paragraph [0087] of Choi). Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. 2022/0384044 to Ulloa-Cerna et al. in view of U.S. 
2019/00295700 to Weinstock et al. as applied to claim 3, and further in view of U.S. 2021/0098089 to Choi. As per claim 11, Ulloa-Cerna et al. and Weinstock et al. in combination teaches the method of claim 3, see discussion of claim 3. The combination may not further, specifically teach wherein, when a user input for a selection method is received, the third sub-control graphic dynamically switches at least one of a color and a state. Choi teaches: --wherein, when a user input for a selection method is received, the third sub-control graphic dynamically switches at least one of a color and a state (see: paragraph [0095] where there is color alerting of severity of the disease). One of ordinary skill before the effective filing date of the claimed invention would have found it obvious to have wherein, when a user input for a selection method is received, the third sub-control graphic dynamically switches at least one of a color and a state as taught by Choi in the method as taught by Ulloa-Cerna et al. and Weinstock et al. in combination with the motivation(s) of notifying of disease status (see: paragraph [0087] of Choi). Claims 12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. 2022/0384044 to Ulloa-Cerna et al. in view of U.S. 2020/0395129 to Kalkstein et al. As per claim 12, Ulloa-Cerna et al. teaches the method of claim 1, see discussion of claim 1. Ulloa-Cerna et al. may not further, specifically teach further comprising displaying a second control graphic configured to display a reading text generated by synthesizing analysis results displayed via the first control graphic in a second area of the user interface. Kalkstein et al. 
teaches: --further comprising displaying a second control graphic configured to display a reading text generated by synthesizing analysis results displayed via the first control graphic in a second area of the user interface (see: paragraphs [0049] and [0190] where there is presentation of a summary of multiple aggregations of data). One of ordinary skill before the effective filing date of the claimed invention would have found it obvious to further comprise displaying a second control graphic configured to display a reading text generated by synthesizing analysis results displayed via the first control graphic in a second area of the user interface as taught by Kalkstein et al. in the method as taught by Ulloa-Cerna et al. with the motivation(s) of improving the analysis (see: paragraph [0007] of Kalkstein et al.). As per claim 14, Ulloa-Cerna et al. teaches the method of claim 1, see discussion of claim 1. Ulloa-Cerna et al. may not further, specifically teach further comprising displaying a third control graphic configured to represent reading results, generated by synthesizing analysis results displayed via the first control graphic, in color in a third area of the user interface. Kalkstein et al. teaches: --further comprising displaying a third control graphic configured to represent reading results, generated by synthesizing analysis results displayed via the first control graphic, in color in a third area of the user interface (see: paragraphs [0049] and [0190] where there is presentation of a summary of multiple aggregations of data). One of ordinary skill before the effective filing date of the claimed invention would have found it obvious to further comprise displaying a third control graphic configured to represent reading results, generated by synthesizing analysis results displayed via the first control graphic, in color in a third area of the user interface as taught by Kalkstein et al. in the method as taught by Ulloa-Cerna et al. 
with the motivation(s) of improving the analysis (see: paragraph [0007] of Kalkstein et al.). Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. 2022/0384044 to Ulloa-Cerna et al. in view of U.S. 2019/00295700 to Weinstock et al., further in view of U.S. 2020/0395129 to Kalkstein et al. as applied to claim 14, and further in view of U.S. 2021/0098089 to Choi. As per claim 15, Ulloa-Cerna et al. and Kalkstein et al. in combination teaches the method of claim 14, see discussion of claim 14. The combination may not further, specifically teach wherein the third control graphic dynamically switches a color according to severity read by synthesizing analysis results displayed via the first control graphic. Choi teaches: --wherein the third control graphic dynamically switches a color according to severity read by synthesizing analysis results displayed via the first control graphic (see: paragraph [0095] where there is color alerting of severity of the disease). One of ordinary skill before the effective filing date of the claimed invention would have found it obvious to have wherein the third control graphic dynamically switches a color according to severity read by synthesizing analysis results displayed via the first control graphic as taught by Choi in the method as taught by Ulloa-Cerna et al. and Kalkstein et al. in combination with the motivation(s) of notifying of disease status (see: paragraph [0087] of Choi). No Art Rejections - Claim 13 Based on the present construction of the claims, claim 13 does not have an art rejection. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Steven G.S. Sanghera whose telephone number is (571)272-6873. The examiner can normally be reached M-F 7:30-5:00 (alternating Fri). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant can be reached at 571-270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /STEVEN G.S. SANGHERA/Primary Examiner, Art Unit 3684

Prosecution Timeline

Feb 18, 2025
Application Filed
Feb 03, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573497
SYSTEMS AND METHODS FOR AUTOMATING WORKFLOWS
2y 5m to grant Granted Mar 10, 2026
Patent 12558015
ENHANCED COMPUTATIONAL HEART SIMULATIONS
2y 5m to grant Granted Feb 24, 2026
Patent 12551170
PROVIDING A VISUAL REPRESENTATION OF PATIENT MONITORING DATA
2y 5m to grant Granted Feb 17, 2026
Patent 12469583
SYSTEM AND METHOD FOR PROCESSING PATIENT-RELATED MEDICAL DATA
2y 5m to grant Granted Nov 11, 2025
Patent 12437870
GENERATION OF DATASETS FOR MACHINE LEARNING MODELS AND AUTOMATED PREDICTIVE MODELING OF OCULAR SURFACE DISEASE
2y 5m to grant Granted Oct 07, 2025
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
30%
Grant Probability
60%
With Interview (+30.4%)
4y 6m
Median Time to Grant
Low
PTA Risk
Based on 165 resolved cases by this examiner. Grant probability derived from career allow rate.
