Prosecution Insights
Last updated: April 19, 2026
Application No. 18/649,128

IMAGE CAPTURE SYSTEM USING AN IMAGER CODE SCANNER AND INFORMATION MANAGEMENT SYSTEM

Office Action: Non-Final, Round 1 (§101, §102, §103, §112, double patenting)
Filed: Apr 29, 2024
Examiner: NAJARIAN, LENA
Art Unit: 3687
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Draeger Medical Systems Inc.

Grant Probability: 38% (At Risk)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 5y 0m
Grant Probability with Interview: 78%

Examiner Intelligence

Career Allow Rate: 38% (178 granted / 464 resolved), -13.6% vs Tech Center average
Interview Lift: +39.3% in resolved cases with an interview
Typical Timeline: 5y 0m average prosecution; 41 applications currently pending
Career History: 505 total applications across all art units

Statute-Specific Performance

§101: 26.9% (-13.1% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§103: 31.9% (-8.1% vs TC avg)
§112: 25.4% (-14.6% vs TC avg)

TC figures are Tech Center average estimates. Based on career data from 464 resolved cases.

Office Action

Rejections under §101, §102, §103, §112, and nonstatutory double patenting.
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 23-25 are rejected under 35 U.S.C. §101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 23-25 are directed to a method (i.e., a process). Accordingly, claims 23-25 are all within at least one of the four statutory categories.

Step 2A - Prong One: Regarding Prong One of Step 2A, the claim limitations are to be analyzed to determine whether, under their broadest reasonable interpretation, they “recite” a judicial exception, or in other words, whether a judicial exception is “set forth” or “described” in the claims. An “abstract idea” judicial exception is subject matter that falls within at least one of the following groupings: a) certain methods of organizing human activity, b) mental processes, and/or c) mathematical concepts.

Independent claim 23 includes limitations that recite at least one abstract idea. Specifically, the independent claim recites: 23.
A method of recording medical image data using an imaging device being configured to capture and output images, the method comprising:

initializing, by a user interface system, an image capture session in response to a first user input;

generating, by the imaging device, a captured image in response to a second user input;

automatically transmitting, by the imaging device, the captured image to the user interface system in response to generating the captured image, wherein the captured image is medically related to a patient;

automatically associating, by the user interface system, the captured image to the patient upon receipt; and

storing, by the user interface system, the received captured image in an electronic medical record (EMR) of the patient associated with the captured image.

The Examiner submits that the foregoing underlined limitations constitute “certain methods of organizing human activity” because initializing an image capture session in response to a first user input and associating the captured image to the patient upon receipt amount to managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), at the currently claimed high level of generality. Accordingly, the claim recites at least one abstract idea.

Step 2A - Prong Two: Regarding Prong Two of Step 2A, it must be determined whether the claim as a whole integrates the abstract idea into a practical application. It must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception.
The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”

Claim 23, as drafted, recites a process that, under its broadest reasonable interpretation, covers certain methods of organizing human activity but for the recitation of generic computer components. That is, other than reciting an imaging device and a user interface system to perform the limitations, nothing in the claim elements precludes the steps from practically being certain methods of organizing human activity. If a claim limitation, under its broadest reasonable interpretation, covers certain methods of organizing human activity but for the recitation of generic computer components, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the imaging device and user interface system are recited at a high level of generality (i.e., as generic computer components performing generic computer functions of generating data, transmitting data, associating data, and storing data), such that the claim amounts to no more than mere instructions to apply the exception using generic computer components.

The claim recites the additional limitation of an imaging device generating a captured image. Such a step would be routinely used by those of ordinary skill in the art and is a well-understood, routine, and conventional activity specified at a high level of generality. It is mere data gathering in conjunction with the abstract idea and therefore adds insignificant extra-solution activity to the judicial exception.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (see MPEP § 2106.05). Their collective functions merely provide conventional computer implementation.

Claims 24-25 ultimately depend from claim 23 and include all the limitations of claim 23. Therefore, claims 24 and 25 recite the same abstract idea. Claims 24 and 25 describe further limitations regarding transmitting the image and reconfiguring into an image capture mode. These merely further describe the abstract idea recited in claim 23, without adding significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.

Step 2B: Regarding Step 2B, independent claim 23 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. Regarding the additional limitations directed to the imaging device automatically transmitting the captured image to the user interface system and the user interface system storing the received captured image in an electronic medical record (EMR), all of which the Examiner submits merely add insignificant extra-solution activity to the abstract idea or are claimed in a merely generic manner (e.g., at a high level of generality), the Examiner further submits that such steps are not unconventional as they merely consist of receiving or transmitting data over a network and storing and retrieving information in memory. See MPEP 2106.05(d)(II).

The dependent claims do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the dependent claims do not integrate the at least one abstract idea into a practical application. Therefore, claims 23-25 are ineligible under 35 USC §101.
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection.
A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 34-39 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 13, 15, and 18 of U.S. Patent No. 11,998,368. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations of claim 34 of this application are substantially similar to limitations in claim 1 of U.S. Patent No. 11,998,368.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C.
112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 26-29 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 26 recites the limitation "the first user input applied at the imaging device" in lines 20-21. There is insufficient antecedent basis for this limitation in the claim. This language is unclear because the claim recited “a first user input applied at the user interface system” in an earlier step. Claims 27-29 incorporate the deficiencies of claim 26, through dependency, and are therefore also rejected.

Claim Objections

Claim 26 is objected to because of the following informalities: change “an image capture mode” to “the image capture mode” at lines 14-15. Appropriate correction is required.

Claim 34 is objected to because of the following informalities: change “the specific patient” to “a specific patient” at line 16. Appropriate correction is required.

Claim 35 is objected to because of the following informalities: change “converts to the code information” to “converts [...]”. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 23-25 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Dehghan Marvast et al. (US 2019/0183366 A1).

(A) Referring to claim 23, Dehghan Marvast discloses A method of recording medical image data using an imaging device being configured to capture and output images, the method comprising (see Fig. 1A, para. 35 & 53 of Dehghan Marvast; The echocardiograph equipment 110 captures images using an associated mode and may capture multiple images of the patient's anatomy, e.g., the chest, and in particular the heart in the case of an echocardiography image, from a variety of different viewpoints to compile a medical imaging study 115 of the patient which may be stored in the medical image storage 140 of the automated echocardiograph measurement extraction system 100.):

initializing, by a user interface system, an image capture session in response to a first user input (Fig. 4, para. 38, 70, 71, and 74 of Dehghan Marvast; As the sonographer is conducting the medical imaging study of the patient, feedback is provided to the sonographer as to which medical image viewpoints still need to be captured, which measurements have been able to be extracted, and when the medical imaging study has been completed.
The medical images of the medical image study data 115 that are captured by the echocardiograph equipment 110, after having been classified according to their mode and viewpoint by the mode recognition component 120 and viewpoint classification component 130, may be dynamically stored in the medical image storage system 140 as the medical imaging study of the patient is ongoing and the operations of the automated echocardiograph measurement extraction system 150 of the illustrative embodiments may be dynamically performed on sets of the captured images or in response to a request from a user, such as a sonographer or other technician or operator of the echocardiography equipment 110.);

generating, by the imaging device, a captured image in response to a second user input (para. 20, 66, and 69-71 of Dehghan Marvast; a sonographer or other medical imaging subject matter expert (SME) may utilize a client computing device 210 to access the services and functionality provided by the cognitive system 200 and the medical image viewer/report generator application 230 to view medical images of one or more medical imaging studies stored in the corpus 240 for one or more patients and/or corresponding reports detailing the echocardiograph measurements automatically extracted from the medical images by the automated echocardiograph measurement extraction system 100. The user of the client computing device 210 may view the medical images and perform operations for annotating the medical images, adding notes to patient electronic medical records (EMRs), and any of a plethora of other operations that may be performed through human-computer interaction based on the human's viewing of the medical images via the cognitive system 200. The cognitive system 200 may determine which medical images, e.g., which viewpoints, still need to be captured in order to provide the missing measurements.
In addition, once all measurements have been able to be extracted by the automated echocardiograph measurement extraction system 100, the cognitive system 200 may further determine that further medical image capture is unnecessary. The cognitive system 200 may provide indications of these various determinations to the human sonographer via the computing device 212. Thus, as the sonographer is conducting the medical imaging study of the patient, feedback is provided to the sonographer as to which medical image viewpoints still need to be captured, which measurements have been able to be extracted, and when the medical imaging study has been completed.);

automatically transmitting, by the imaging device, the captured image to the user interface system in response to generating the captured image, wherein the captured image is medically related to a patient (Fig. 4, para. 3, 35, 74, and 93 of Dehghan Marvast; As shown in FIG. 4, the operation starts by receiving a plurality of medical images as part of a medical imaging study for extraction of echocardiograph measurements (step 410). The modes of the received medical images are determined (step 420) and the viewpoints of the medical images are classified (step 430). A subset of medical images is selected based on a selected mode and viewpoints, where the subset of medical images may have various different viewpoints (step 440). The selected medical images of the same mode, but with varying viewpoints, are input to a trained convolutional neural network (step 450) which operates on the medical image data to extract echocardiograph measurements from the medical images based on a learned association of medical image viewpoints and corresponding echocardiograph measurements (step 460). The extracted echocardiograph measurements are then output to a cognitive system and/or medical image viewer for reporting of the extracted echocardiograph measurements and/or performance of one or more cognitive operations (step 470).
The medical image viewer application 230 provides the logic for rendering medical images such that a user may view the medical images and the corresponding annotations or labels based on the automatically extracted echocardiograph measurements, manipulate the view via a graphical user interface, and the like. The medical image viewer application 230 may comprise various types of graphical user interface elements for presenting medical image information to the user);

automatically associating, by the user interface system, the captured image to the patient upon receipt (para. 35 & 66 of Dehghan Marvast; The medical imaging study 115 may be stored as one or more data structures, e.g., a separate data structure for each captured medical image, medical imaging study, or the like, in a medical image storage system 140 in association with an identifier of the patient and/or electronic medical records (EMRs) of the patient.);

and storing, by the user interface system, the received captured image in an electronic medical record (EMR) of the patient associated with the captured image (para. 35, 66, 74, and 78 of Dehghan Marvast; The medical imaging study 115 may be stored as one or more data structures, e.g., a separate data structure for each captured medical image, medical imaging study, or the like, in a medical image storage system 140 in association with an identifier of the patient and/or electronic medical records (EMRs) of the patient. The user of the client computing device 210 may view the medical images and perform operations for annotating the medical images, adding notes to patient electronic medical records (EMRs), and any of a plethora of other operations that may be performed through human-computer interaction based on the human's viewing of the medical images via the cognitive system 200.).
(B) Referring to claim 24, Dehghan Marvast discloses wherein storing the received captured image in the EMR of the patient includes transmitting the received captured image to a data server that stores the EMR of the patient (para. 35, 36, 82, 91, and 93 of Dehghan Marvast).

(C) Referring to claim 25, Dehghan Marvast discloses further comprising: reconfiguring, by the user interface system, the imaging device into an image capture mode in response to initializing the image capture session (para. 35, 36, 81, and 82 of Dehghan Marvast).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 26-28, 34, and 37-39 are rejected under 35 U.S.C. 103 as being unpatentable over Dehghan Marvast et al. (US 2019/0183366 A1) in view of Chiu et al. (US 2009/0212113 A1).

(A) Referring to claim 26, Dehghan Marvast discloses A method of recording medical image data in an electronic medical record (EMR) associated with a patient using an imaging device, the method comprising (see Fig. 1A, para.
35 & 53 of Dehghan Marvast; The echocardiograph equipment 110 captures images using an associated mode and may capture multiple images of the patient's anatomy, e.g., the chest, and in particular the heart in the case of an echocardiography image, from a variety of different viewpoints to compile a medical imaging study 115 of the patient which may be stored in the medical image storage 140 of the automated echocardiograph measurement extraction system 100. The medical imaging study 115 may be stored as one or more data structures, e.g., a separate data structure for each captured medical image, medical imaging study, or the like, in a medical image storage system 140 in association with an identifier of the patient and/or electronic medical records (EMRs) of the patient.):

initializing, by a user interface system, an image capture session in response to a first user input applied at the user interface system (Fig. 4, para. 38, 70, 71, and 74 of Dehghan Marvast; As the sonographer is conducting the medical imaging study of the patient, feedback is provided to the sonographer as to which medical image viewpoints still need to be captured, which measurements have been able to be extracted, and when the medical imaging study has been completed.
The medical images of the medical image study data 115 that are captured by the echocardiograph equipment 110, after having been classified according to their mode and viewpoint by the mode recognition component 120 and viewpoint classification component 130, may be dynamically stored in the medical image storage system 140 as the medical imaging study of the patient is ongoing and the operations of the automated echocardiograph measurement extraction system 150 of the illustrative embodiments may be dynamically performed on sets of the captured images or in response to a request from a user, such as a sonographer or other technician or operator of the echocardiography equipment 110.);

determining, by the user interface system, a first operation mode of the imaging device, the first operation mode being an active one of the plurality of operation modes at a time the image capture session is initialized (para. 15, 35, and 38 of Dehghan Marvast; various types of echocardiograph equipment 110 may be utilized and echocardiograph images may be provided in different modes, e.g., A-Mode, B-Mode, Doppler, M-Mode, etc. The echocardiograph equipment 110 captures images using an associated mode and may capture multiple images of the patient's anatomy, e.g., the chest, and in particular the heart in the case of an echocardiography image, from a variety of different viewpoints to compile a medical imaging study 115 of the patient which may be stored in the medical image storage 140 of the automated echocardiograph measurement extraction system 100.);

storing, by the user interface system, the first operation mode in a memory device (para.
15, 35, 36, and 39 of Dehghan Marvast; With echocardiography, different modes (e.g., A-mode, where a single transducer scans a line through the body with the echoes plotted as a function of depth, or B-mode which displays the acoustic impedance of a two-dimensional cross-section of tissue) and viewpoints of medical images are taken at various cardiac phases. Based on the classification of mode by the mode recognition component 120, medical images of a mode of interest may be selected from those that are stored in the medical image storage 140, for a particular patient, for use in training/testing, or runtime execution. Separate instances of the echocardiograph measurement extraction component 150 may be implemented for different modes, e.g., one instance for B-mode medical images, one instance for M-mode medical images, and one instance for Doppler mode medical images. The mode recognition component 120 analyzes each of the medical images in the medical image study data 115 to classify the medical image into different modes.);

determining, by the user interface system, whether the first operation mode is an image capture mode (para. 35 of Dehghan Marvast; various types of echocardiograph equipment 110 may be utilized and echocardiograph images may be provided in different modes, e.g., A-Mode, B-Mode, Doppler, M-Mode, etc. The echocardiograph equipment 110 captures images using an associated mode and may capture multiple images of the patient's anatomy, e.g., the chest, and in particular the heart in the case of an echocardiography image, from a variety of different viewpoints to compile a medical imaging study 115 of the patient which may be stored in the medical image storage 140 of the automated echocardiograph measurement extraction system 100.);

on a first condition that the first operation mode is not the image capture mode, configuring, by the user interface system, the imaging device into the image capture mode (para.
35, 36, 39, and 70 of Dehghan Marvast; various types of echocardiograph equipment 110 may be utilized and echocardiograph images may be provided in different modes, e.g., A-Mode, B-Mode, Doppler, M-Mode, etc. The echocardiograph equipment 110 captures images using an associated mode and may capture multiple images of the patient's anatomy, e.g., the chest, and in particular the heart in the case of an echocardiography image, from a variety of different viewpoints to compile a medical imaging study 115 of the patient which may be stored in the medical image storage 140 of the automated echocardiograph measurement extraction system 100. The cognitive system 200 may evaluate the medical imaging study that is being performed, identify which measurements are to be extracted from the medical images based on the type of medical imaging study, and, through the learning performed by the CNN 160 with regard to associations of medical image viewpoints and corresponding echocardiograph measurements, may determine which medical images need to be captured as part of the medical imaging study.);

on a second condition that the first operation mode is the image capture mode, maintaining, by the user interface system, the imaging device in the image capture mode (para. 35, 38, and 70 of Dehghan Marvast; various types of echocardiograph equipment 110 may be utilized and echocardiograph images may be provided in different modes, e.g., A-Mode, B-Mode, Doppler, M-Mode, etc. The echocardiograph equipment 110 captures images using an associated mode and may capture multiple images of the patient's anatomy, e.g., the chest, and in particular the heart in the case of an echocardiography image, from a variety of different viewpoints to compile a medical imaging study 115 of the patient which may be stored in the medical image storage 140 of the automated echocardiograph measurement extraction system 100.
The cognitive system 200 may determine which medical images, e.g., which viewpoints, still need to be captured in order to provide the missing measurements.);

generating, by the imaging device, the captured image in response to the first user input applied at the imaging device (para. 20, 66, 69, 70, and 71 of Dehghan Marvast; a sonographer or other medical imaging subject matter expert (SME) may utilize a client computing device 210 to access the services and functionality provided by the cognitive system 200 and the medical image viewer/report generator application 230 to view medical images of one or more medical imaging studies stored in the corpus 240 for one or more patients and/or corresponding reports detailing the echocardiograph measurements automatically extracted from the medical images by the automated echocardiograph measurement extraction system 100. The user of the client computing device 210 may view the medical images and perform operations for annotating the medical images, adding notes to patient electronic medical records (EMRs), and any of a plethora of other operations that may be performed through human-computer interaction based on the human's viewing of the medical images via the cognitive system 200. The cognitive system 200 may determine which medical images, e.g., which viewpoints, still need to be captured in order to provide the missing measurements. In addition, once all measurements have been able to be extracted by the automated echocardiograph measurement extraction system 100, the cognitive system 200 may further determine that further medical image capture is unnecessary. The cognitive system 200 may provide indications of these various determinations to the human sonographer via the computing device 212.
Thus, as the sonographer is conducting the medical imaging study of the patient, feedback is provided to the sonographer as to which medical image viewpoints still need to be captured, which measurements have been able to be extracted, and when the medical imaging study has been completed.); automatically transmitting, by the imaging device, the captured image to the user interface system in response to generating the captured image, wherein the captured image is medically related to the patient (Fig. 4, para. 3, 35, 74, and 93 of Dehghan Marvast; As shown in FIG. 4, the operation starts by receiving a plurality of medical images as part of a medical imaging study for extraction of echocardiograph measurements (step 410). The modes of the received medical images are determined (step 420) and the viewpoints of the medical images are classified (step 430). A subset of medical images is selected based on a selected mode and viewpoints, where the subset of medical images may have various different viewpoints (step 440). The selected medical images of the same mode, but with varying viewpoints, are input to a trained convolutional neural network (step 450) which operates on the medical image data to extract echocardiograph measurements from the medical images based on a learned association of medical image viewpoints and corresponding echocardiograph measurements (step 460). The extracted echocardiograph measurements are then output to a cognitive system and/or medical image viewer for reporting of the extracted echocardiograph measurements and/or performance of one or more cognitive operations (step 470). The medical image viewer application 230 provides the logic for rendering medical images such that a user may view the medical images and the corresponding annotations or labels based on the automatically extracted echocardiograph measurements, manipulate the view via a graphical user interface, and the like. 
The medical image viewer application 230 may comprise various types of graphical user interface elements for presenting medical image information to the user); automatically associating, by the user interface system, the captured image to the patient upon receipt (para. 35 & 66 of Dehghan Marvast; The medical imaging study 115 may be stored as one or more data structures, e.g., a separate data structure for each captured medical image, medical imaging study, or the like, in a medical image storage system 140 in association with an identifier of the patient and/or electronic medical records (EMRs) of the patient.); and storing, by the user interface system, the received captured image in the EMR associated with the patient (para. 35, 66, 74, and 78 of Dehghan Marvast; The medical imaging study 115 may be stored as one or more data structures, e.g., a separate data structure for each captured medical image, medical imaging study, or the like, in a medical image storage system 140 in association with an identifier of the patient and/or electronic medical records (EMRs) of the patient. The user of the client computing device 210 may view the medical images and perform operations for annotating the medical images, adding notes to patient electronic medical records (EMRs), and any of a plethora of other operations that may be performed through human-computer interaction based on the human's viewing of the medical images via the cognitive system 200.). Dehghan Marvast does not disclose the imaging device being configurable in a plurality of operation modes, including a code scanning mode and an image capture mode, wherein, in the code scanning mode, the imaging device is configured to detect a machine-readable code and output code information representative of the detected machine-readable code, and wherein, in the image capture mode, the imaging device is configured to output a captured image. 
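The FIG. 4 workflow recited above (steps 410-470) amounts to a filter-then-infer pipeline: receive the study images, determine modes, classify viewpoints, select a same-mode subset, and run the trained CNN over the varying-viewpoint subset. A minimal illustrative sketch of that flow; all names here (`MedicalImage`, `extract_measurements`) are hypothetical and do not appear in Dehghan Marvast:

```python
from dataclasses import dataclass

@dataclass
class MedicalImage:
    mode: str       # e.g., "B-Mode", "Doppler", "M-Mode"
    viewpoint: str  # e.g., "apical-4-chamber"
    pixels: bytes

def extract_measurements(images, selected_mode, cnn):
    """Sketch of steps 420-460: keep images of the selected mode,
    then pass the varying-viewpoint subset to the trained network."""
    subset = [im for im in images if im.mode == selected_mode]  # steps 420-440
    return cnn(subset)                                          # steps 450-460
    # step 470 would report the returned measurements to the viewer
```

This is only a shape for reading the claim mapping; the reference's actual selection and CNN logic is described in its paragraphs 35 and 74, not reproduced here.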
Chiu discloses an imaging device being configurable in a plurality of operation modes, including a code scanning mode and an image capture mode, wherein, in the code scanning mode, the imaging device is configured to detect a machine-readable code and output code information representative of the detected machine-readable code, and wherein, in the image capture mode, the imaging device is configured to output a captured image (Fig. 1, para. 19, 20, 25, and 28-30 of Chiu; As shown in FIG. 1, image capture device 10 includes an image sensor 12, an image processor 14, and an image storage module 16. Image sensor 12 captures still images, or possibly full motion video sequences, in which case the integrated barcode scanning techniques may be performed on one or more image frames of the video sequence. A barcode scanner module 18 of image processor 14 determines whether the digital image of the scene of interest includes one or more barcodes while operating in a non-barcode scanning image capture mode. Although the integrated barcode scanning techniques of this disclosure may be used to detect barcodes in any non-barcode image capture mode, the techniques are described in the context of image capture device 10 operating in the default image capture mode.) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Chiu within Dehghan Marvast. The motivation for doing so would have been to improve quality (abstract of Chiu). (B) Referring to claim 27, Dehghan Marvast discloses wherein storing the received captured image in the EMR associated with the patient includes transmitting the received captured image to a data server that stores the EMR associated with the patient (para. 35, 36, 82, 91, and 93 of Dehghan Marvast). 
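The dual-mode limitation mapped to Chiu above reduces to a small state machine: the same captured frame is either decoded into code information (code scanning mode) or passed through unmodified (image capture mode). A hedged sketch with hypothetical names (`ImagingDevice`, `decode_barcode`); nothing here is taken from Chiu's actual implementation:

```python
CODE_SCANNING, IMAGE_CAPTURE = "code_scanning", "image_capture"

class ImagingDevice:
    """Toy model of the claimed two-mode imager."""
    def __init__(self, mode=IMAGE_CAPTURE):
        self.mode = mode

    def capture(self, frame, decode_barcode):
        if self.mode == CODE_SCANNING:
            # detect and decode a machine-readable code in the frame
            return {"type": "code", "value": decode_barcode(frame)}
        # image capture mode: output the image with no code recognition
        return {"type": "image", "value": frame}
```

The `{"type": ...}` envelope stands in for whatever the device actually outputs to the user interface system; it is illustrative only.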
(C) Referring to claim 28, Dehghan Marvast discloses the method further comprising: terminating, by the user interface system, the image capture session in response to a second user input applied at the user interface system, wherein terminating the image capture session comprises: reconfiguring, by the user interface system, the imaging device into the first operation mode on the first condition that the first operation mode is not the image capture mode; and maintaining, by the user interface system, the imaging device in the image capture mode on the second condition that the first operation mode is the image capture mode (para. 35, 36, 38, 70, 71, and 93 of Dehghan Marvast). (D) Referring to claim 34, Dehghan Marvast discloses An electronic medical record (EMR) management system, comprising (see Figures 1A, 2, and 3 and para. 35, 36, and 61 of Dehghan Marvast): a data server configured to store a plurality of EMRs for a plurality of patients, wherein each EMR is associated with a different one of the plurality of patients (para. 29, 64, 66, 69, and 35 of Dehghan Marvast; The medical imaging study 115 may be stored as one or more data structures, e.g., a separate data structure for each captured medical image, medical imaging study, or the like, in a medical image storage system 140 in association with an identifier of the patient and/or electronic medical records (EMRs) of the patient. The user of the client computing device 210 may view the medical images and perform operations for annotating the medical images, adding notes to patient electronic medical records (EMRs), and any of a plethora of other operations that may be performed through human-computer interaction based on the human's viewing of the medical images via the cognitive system 200.); an imaging device having an image capture mode (para. 
15 and 35 of Dehghan Marvast; The echocardiograph equipment 110 captures images using an associated mode and may capture multiple images of the patient's anatomy, e.g., the chest, and in particular the heart in the case of an echocardiography image, from a variety of different viewpoints to compile a medical imaging study 115 of the patient which may be stored in the medical image storage 140 of the automated echocardiograph measurement extraction system 100.); and an information management system communicatively coupled to the imaging device and to the data server (Fig. 2, para. 63 & 64 of Dehghan Marvast; FIG. 2 depicts a schematic diagram of one illustrative embodiment of a cognitive system 200 implementing a medical image viewer/report generator application 230 in a computer network 202, and which operates in conjunction with a normality classifier, such as normality classifier 100 in FIG. 1A, in accordance with one illustrative embodiment. The cognitive system 200 may further comprise various other types of cognitive operation logic for performing cognitive operations based on analysis of received medical image data and the automatic extraction of echocardiograph measurements from medical images of medical imaging studies having various viewpoints, in accordance with the operation of the automated echocardiograph measurement extraction system 100. The network 202 includes multiple computing devices 204A-D, which may operate as server computing devices, and 210-212 which may operate as client computing devices, in communication with each other and with other devices or components via one or more wired and/or wireless data communication links, where each communication link comprises one or more of wires, routers, switches, transmitters, receivers, or the like.); wherein when the imaging device is in the image capture mode, the imaging device is configured to capture the digital image and output the digital image to the information management system (para. 
35 & 61 of Dehghan Marvast; The echocardiograph equipment 110 captures images using an associated mode and may capture multiple images of the patient's anatomy, e.g., the chest, and in particular the heart in the case of an echocardiography image, from a variety of different viewpoints to compile a medical imaging study 115 of the patient which may be stored in the medical image storage 140 of the automated echocardiograph measurement extraction system 100. The medical imaging study 115 may be stored as one or more data structures, e.g., a separate data structure for each captured medical image, medical imaging study, or the like, in a medical image storage system 140 in association with an identifier of the patient and/or electronic medical records (EMRs) of the patient. FIGS. 2-3 are directed to describing an example cognitive system for healthcare applications which implements a medical image viewer application 230 for viewing medical images and obtaining information about the medical images of particular patients. 
The cognitive system may also provide other cognitive functionality including treatment recommendations, patient electronic medical record (EMR) analysis and correlation with medical imaging data, intervention planning and scheduling operations, patient triage operations, and various other types of decision support functionality involving cognitive analysis and application of computer based artificial intelligence or cognitive logic to large volumes of data regarding patients, at least a portion of which involves the normality scoring mechanisms of the normality classifier.); and wherein when the information management system receives the captured image from the imaging device, the information management system automatically associates the captured image with the specific patient, including associating the captured image with an EMR of the specific patient, and upon associating the captured image with the EMR of the specific patient, automatically store the captured image in the EMR of the specific patient in the data server (para. 35, 64, 66, 67, 69, and 78 of Dehghan Marvast; The medical imaging study 115 may be stored as one or more data structures, e.g., a separate data structure for each captured medical image, medical imaging study, or the like, in a medical image storage system 140 in association with an identifier of the patient and/or electronic medical records (EMRs) of the patient. The user of the client computing device 210 may view the medical images and perform operations for annotating the medical images, adding notes to patient electronic medical records (EMRs), and any of a plethora of other operations that may be performed through human-computer interaction based on the human's viewing of the medical images via the cognitive system 200. The medical images captured may be provided to a storage system such as part of a corpus or corpora of electronic data, such as corpora 206 and/or 240. 
The medical image data may have associated metadata generated by the equipment and/or computing systems associated with the equipment, to provide further identifiers of characteristics of the medical image, e.g., DICOM tags, metadata specifying mode, viewpoint, or the like.). Dehghan Marvast does not disclose the imaging device having a code scanning mode; wherein when the imaging device is in the code scanning mode, the imaging device is configured to capture a digital image, identify a machine readable code in the captured image, generate code information representative of the machine readable code and output the code information to the information management system. Chiu discloses the imaging device having a code scanning mode; wherein when the imaging device is in the code scanning mode, the imaging device is configured to capture a digital image, identify a machine readable code in the captured image, generate code information representative of the machine readable code and output the code information to the information management system (Fig. 1, para. 25, 28-30, 32, 36, 40, and 41 of Chiu; As shown in FIG. 1, image capture device 10 includes an image sensor 12, an image processor 14, and an image storage module 16. Image sensor 12 captures still images, or possibly full motion video sequences, in which case the integrated barcode scanning techniques may be performed on one or more image frames of the video sequence. A barcode scanner module 18 of image processor 14 determines whether the digital image of the scene of interest includes one or more barcodes while operating in a non-barcode scanning image capture mode. Although the integrated barcode scanning techniques of this disclosure may be used to detect barcodes in any non-barcode image capture mode, the techniques are described in the context of image capture device 10 operating in the default image capture mode. 
Image processor 14 may store the captured image or at least the region of the captured image that includes the barcode in image storage module 16. Alternatively, image processor 14 may perform additional processing on the image and store either the entire image or the region containing the barcode in processed or encoded formats in image storage module 16. If the image information is accompanied by audio information, the audio information also may be stored in image storage module 16, either independently or in conjunction with video information comprising one or more frames containing the image information.) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Chiu within Dehghan Marvast. The motivation for doing so would have been to improve quality (abstract of Chiu). (E) Referring to claim 37, Dehghan Marvast discloses wherein when the imaging device is in the image capture mode, the imaging device is configured to output the digital image to the information management system without performing any code recognition or analysis (para. 35, 38, and 56 of Dehghan Marvast). (F) Referring to claim 38, Dehghan Marvast discloses wherein the information management system is configured to automatically store the captured image in the EMR of the specific patient in response to the information management system associating the captured image with the specific patient (para. 35, 36, 38, 66 and 78 of Dehghan Marvast). (G) Referring to claim 39, Dehghan Marvast does not disclose wherein the machine readable code comprises at least one selected from the group of: a barcode, a Quick Response (QR) code, and a radio-frequency identification (RFID) tag. Chiu discloses wherein the machine readable code comprises at least one selected from the group of: a barcode, a Quick Response (QR) code, and a radio-frequency identification (RFID) tag (para. 
5 and abstract of Chiu). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned feature of Chiu within Dehghan Marvast. The motivation for doing so would have been to improve quality (abstract of Chiu).

Claim(s) 29 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dehghan Marvast et al. (US 2019/0183366 A1) in view of Chiu et al. (US 2009/0212113 A1), and further in view of Mankovich et al. (US 2015/0235365 A1). (A) Referring to claim 29, Dehghan Marvast discloses wherein: initializing the image capture session in response to the first user input applied at the user interface system comprises launching an image capture graphical user interface (GUI) on a display device (para. 74 & 93 of Dehghan Marvast). Dehghan Marvast and Chiu do not expressly disclose terminating the image capture session further comprises closing, by the user interface system, the image capture GUI on the display device. Mankovich discloses terminating the image capture session further comprises closing, by the user interface system, the image capture GUI on the display device (para. 27 of Mankovich). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned feature of Mankovich within Dehghan Marvast and Chiu. The motivation for doing so would have been to better manage the user interface (para. 27 of Mankovich).

Claim(s) 35 and 36 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dehghan Marvast et al. (US 2019/0183366 A1) in view of Chiu et al. (US 2009/0212113 A1), and further in view of Neff (US 2014/0088983 A1).
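Claims 35 and 36, mapped below, turn on converting scanned code information into identification (ID) information by cross-referencing records held by the data server. A minimal sketch of that lookup step; the function name and record format are hypothetical, not taken from Neff:

```python
def convert_code_to_id(code_info, data_server_records):
    """Resolve scanned code information to a (kind, id) record,
    e.g., ("patient_id", "P-001"), by cross-referencing the data
    server's stored ID information. Returns None if unmatched."""
    return data_server_records.get(code_info)
```

In practice the "records" would be a query against the data server rather than an in-memory dictionary; the dictionary stands in for that lookup.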
(A) Referring to claim 35, Dehghan Marvast and Chiu do not disclose wherein, when the information management system receives the code information from the imaging device, the information management system converts the code information into identification (ID) information corresponding to one selected from the group of: a patient ID, equipment ID, medication ID, medical fluid bag ID. Neff discloses wherein, when the information management system receives the code information from the imaging device, the information management system converts the code information into identification (ID) information corresponding to one selected from the group of: a patient ID, equipment ID, medication ID, medical fluid bag ID (para. 33 of Neff). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Neff within Dehghan Marvast and Chiu. The motivation for doing so would have been to store pertinent data (para. 19 of Neff).

(B) Referring to claim 36, Dehghan Marvast and Chiu do not disclose wherein the information management system is configured to convert the code information into ID information by cross-referencing the code information with ID information stored by the data server. Neff discloses wherein the information management system is configured to convert the code information into ID information by cross-referencing the code information with ID information stored by the data server (para. 19, 25, 26, and 33 of Neff). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Neff within Dehghan Marvast and Chiu. The motivation for doing so would have been to store pertinent data (para. 19 of Neff).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
The cited but not applied prior art teaches a dual mode reader and method of reading DPM codes therewith (US 2020/0175236 A1); a medical image creating system, medical image creating method and display controlling program (US 2005/0222871 A1); a system and method for managing an endoscopic lab (US 2005/0075544 A1); and a method and apparatus for aiding imaging diagnosis using medical image, and image diagnosis aiding system for performing the method (US 2012/0166211 A1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LENA NAJARIAN whose telephone number is (571)272-7072. The examiner can normally be reached Monday - Friday 9:30 am-6 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mamon Obeid, can be reached at (571)270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LENA NAJARIAN/Primary Examiner, Art Unit 3687

Prosecution Timeline

Apr 29, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573489
INFUSION PUMP LINE CONFIRMATION
2y 5m to grant · Granted Mar 10, 2026
Patent 12562247
PATIENT DATA MANAGEMENT PLATFORM
2y 5m to grant · Granted Feb 24, 2026
Patent 12542208
ALERT NOTIFICATION DEVICE OF DENTAL PROCESSING MACHINE, ALERT NOTIFICATION SYSTEM, AND NON-TRANSITORY RECORDING MEDIUM STORING COMPUTER PROGRAM FOR ALERT NOTIFICATION
2y 5m to grant · Granted Feb 03, 2026
Patent 12488880
Discovering Context-Specific Serial Health Trajectories
2y 5m to grant · Granted Dec 02, 2025
Patent 12488894
SYSTEM AND METHODS FOR MACHINE LEARNING DRIVEN CONTOURING CARDIAC ULTRASOUND DATA
2y 5m to grant · Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
38%
Grant Probability
78%
With Interview (+39.3%)
5y 0m
Median Time to Grant
Low
PTA Risk
Based on 464 resolved cases by this examiner. Grant probability derived from career allow rate.
