Prosecution Insights
Last updated: April 19, 2026
Application No. 18/951,545

DIAGNOSTIC DEVICE FOR REMOTE CONSULTATIONS AND TELEMEDICINE

Non-Final OA (§102, §103)
Filed
Nov 18, 2024
Examiner
WEBB, JESSICA MARIE
Art Unit
3683
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Medentum Innovations Inc.
OA Round
1 (Non-Final)
Grant Probability: 33% (At Risk)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 86%

Examiner Intelligence

Grants only 33% of cases
Career Allow Rate: 33% (33 granted / 99 resolved; -18.7% vs TC avg)

Strong +52% interview lift
Interview Lift: +52.5% (resolved cases with interview)

Typical timeline
Avg Prosecution: 3y 0m (21 currently pending)

Career history
Total Applications: 120 (across all art units)
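The headline figures above are simple ratios. As a sanity check, they can be reproduced from the raw counts shown, assuming the interview lift is expressed in percentage points added to the career allow rate (that definition is an assumption, not stated in the panel):

```python
# Reproduce the examiner metrics shown above (assumed definitions).
granted, resolved = 33, 99

career_allow_rate = granted / resolved * 100         # ratio of granted to resolved cases
interview_lift = 52.5                                # percentage points (assumed meaning)
with_interview = career_allow_rate + interview_lift  # implied allow rate after an interview

print(f"Career allow rate: {career_allow_rate:.1f}%")
print(f"Allow rate with interview: {with_interview:.0f}%")
```

Under that assumption the numbers are internally consistent: 33/99 gives the 33% career rate, and adding the 52.5-point lift lands on the 86% "With Interview" figure shown at the top of the report.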

Statute-Specific Performance

§101: 33.6% (-6.4% vs TC avg)
§103: 34.3% (-5.7% vs TC avg)
§102: 5.1% (-34.9% vs TC avg)
§112: 23.3% (-16.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 99 resolved cases
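The "vs TC avg" deltas imply the Tech Center baseline for each statute. A small sketch (assuming delta = examiner rate minus TC average, which is the usual convention but not stated here) recovers them:

```python
# Statute-specific allow rates and their deltas vs the Tech Center average,
# as shown in the panel above: statute -> (examiner rate %, delta %).
rates = {
    "101": (33.6, -6.4),
    "103": (34.3, -5.7),
    "102": (5.1, -34.9),
    "112": (23.3, -16.7),
}

for statute, (examiner_rate, delta) in rates.items():
    tc_avg = examiner_rate - delta  # assumed: delta = examiner rate - TC average
    print(f"§{statute}: examiner {examiner_rate}% vs TC avg {tc_avg:.1f}%")
```

Notably, all four deltas back out the same 40.0% baseline, consistent with a single TC-wide average being used for every statute in this panel.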

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Amendment

In the preliminary amendment dated 1/29/2025, the following occurred: claims 21-40 are new; and claims 1-20 are canceled. Claims 21-40 are pending and have been examined.

Priority

Acknowledgement is made of applicant’s claim to priority under 35 U.S.C. 120, 121, 365(c), or 386(c) and 37 CFR 1.78 for a continuing application, which claims priority to U.S. Application 17/215,044 filed 03/29/2021 (Patent No. 12,183,457), which claims priority to U.S. Provisional Patent Application No. 63/048,094 filed 07/04/2020.

Information Disclosure Statement

The Information Disclosure Statement (IDS) submitted on 02/05/2025 follows the provisions of 37 CFR 1.97 and has been fully considered by the Examiner.

Subject Matter Eligibility – 35 USC § 101

Claims 21, 28 and 36 recite the additional element of “[a] hand-held device comprising: a wireless transceiver; an endoscopic camera…” (claim 21 being representative). As a preliminary consideration, each component of the hand-held device performs a function and cannot be said to be generally linked. As for the functions, for example, the endoscopic camera obtains video data of a particular body part of a user (throat, mouth, nasal cavity, or ear). The endoscopic camera transmits the obtained video data to a user device for remote viewing of the current position of the endoscopic camera in relation to the body part. After the user re-positions the endoscopic camera, the endoscopic camera captures an image of the particular body part at that desired position. For at least this reason, independent claims 21, 28 and 36 recite additional elements that are sufficient to amount to significantly more than the judicial exception.
Looking to the claims as a whole, the combination of additional elements aforementioned amounts to significantly more than the judicial exception itself. MPEP § 2106.05(I)(A). Therefore, claims 21, 28 and 36 (and their dependents) are subject matter eligible according to the most recent version of the MPEP.

Alternately, the independent claims 21, 28 and 36 each provide a practical application under the subject matter eligibility analysis at Step 2A, Prong Two, because the judicial exception is implemented with a particular machine of a hand-held device limited to include certain structure including an endoscopic camera. As explained in the MPEP, the evaluation of Prong Two requires the use of the considerations (e.g., implementing with a particular machine) identified by the Supreme Court and the Federal Circuit, to ensure that the claim as a whole “integrates [the] judicial exception into a practical application [that] will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception” (84 Fed. Reg. at 53). In the independent claims, the endoscopic camera obtains video data of a particular body part of a user (throat, mouth, nasal cavity, or ear). The endoscopic camera transmits the obtained video data to a user device for remote viewing of the current position of the endoscopic camera in relation to the body part. After the user re-positions the endoscopic camera, the endoscopic camera captures an image of the particular body part at that desired position. The application of the judicial exception by or with the endoscopic camera of the hand-held diagnostic device is made integral to achieving performance of the method. See MPEP 2106.05(b). Therefore, the claimed invention is subject matter eligible according to the most recent version of the MPEP.
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 21, 25-26, 28, 31 and 35 are rejected under 35 U.S.C. §§ 102(a)(1) and 102(a)(2) as being anticipated by Sagalovich et al. (US 2018/0344160 A1; “Sagalovich” herein).

Re. Claim 21, Sagalovich teaches a hand-held device (100) comprising: a wireless transceiver (123) (Fig. 25, [0140]-[0141] teach an apparatus 100, in an embodiment, is configured to wirelessly communicate… via a connector interface 123.); an endoscopic camera (145) (see Fig. 30 and [0141]); a stethoscope (134) (see Fig. 29A and [0141]); one or more processors (101, 2004, 2204); and memory operatively connected to the one or more processors and including instructions (2501-2503, 2207), that when executed, cause the hand-held device to (Fig.
3, [0214] teach the diagnosis unit 101 comprising the microcontroller 2004 is configured to execute computer program instructions defined by modules 2501-2503. See also [0157], [0200], [0202].): obtain, in real time and via the endoscopic camera, video data of a user's throat, mouth, nasal cavity, or ear ([0082] teaches, in an embodiment, the power control trigger element 103 and the action control trigger element 104 can be used, for example, for activating or deactivating the image capture device 106, for activating or deactivating a medical diagnostic device, e.g., an endoscope device, operably connected to the diagnosis control unit 101 for controlling an image capture and storage operation of the image capture device 106, for recording, transferring, or deleting diagnostic image data… via the image capture device 106 and/or the medical diagnostic device. [0090] teaches recording a video or capturing an image of the patient’s ear canal and tympanic membrane… and storing the recorded video or the captured image in the local memory of the image capture device 106 prior to transmission.); transmit, in real time and via the wireless transceiver, the obtained video data to a user device for remote real time viewing of the obtained video data by the user for positioning of the endoscopic camera by the user relative to the user's throat, mouth, nasal cavity, or ear (see previous citations. Fig. 25, [0090], [0097] teach transmission to a medical diagnostic examination system 2506, e.g., of a remote user device 2511 or a local user device 2505. Fig. 
25, [0097], [0143] teach the illuminated and indicated anatomical examination areas communicated to system 2506 are viewed locally on a local user device 2505 or remotely on a remote user device 2511 via the communication network for the remote diagnostic examinations.); after the positioning of the endoscopic camera by the user, obtain, in real time and via the endoscopic camera, an image of the user's throat, mouth, nasal cavity, or ear (The Examiner interprets the user as positioning the endoscope device based on the transmission of the anatomical area. [0082], [0090] teach elements 103-104 can be used for activating the endoscope device operably coupled to the diagnostic control unit 101 for controlling the capture of an image created from the real time video output of the patient’s ear.); transmit, via the wireless transceiver, the obtained image to the user device for remote real time viewing of the obtained image by the user (Fig. 25, [0097], [0143] teach the connector interface 123 receives and transmits the processed diagnostic image data of the patient’s ear from the camera module 124 to the medical diagnostic examination system 2506 of the local user device 2505 or the remote user device 2511.); obtain, via the stethoscope, auscultation data of the user ([0107] teaches using the stethoscope device 134 with the apparatus 100; and the stethoscope device 134 communicates the diagnostic acoustic data via the connector interface 123 to the medical examination system 2506 on the local user device 2505 (transmit the auscultation data obtained).); and transmit concurrently with the image, via the wireless transceiver, the obtained auscultation data to the user device ([0082] teaches elements 103-104 can be used for activating the stethoscope device or the endoscope device… for controlling an image capture and storage operation of the image capture device 106, for recording or transferring diagnostic image data and diagnostic examination data via the image 
capture device 106 and/or the medical diagnostic device. [0110] teaches the diagnostic examination data includes diagnostic acoustic data recorded by the stethoscope device 134. The Examiner notes that the recited transmitting “concurrently” encompasses Sagalovich’s disclosed transferring both image data and diagnostic acoustic data.) Re. Claim 25, Sagalovich teaches the hand-held device of claim 21, further comprising one or more of a microphone, an EKG sensor, or a blood pressure sensor (Figs. 10A, 22 and [0106] teach the stethoscope device 134 comprises an array of piezoelectric microphones 2201.) Re. Claim 26, Sagalovich teaches the hand-held device of claim 21, wherein the auscultation data includes sounds from one or more of a heart, lungs, or a stomach of the user ([0106] teaches the piezoelectric microphones 2201 of the stethoscope device 134 convert internal sounds from the patient's body to electrical signals that are transmitted to the diagnosis control unit 101 through the spring contact connectors 118 connected to the connector slot 112 of the diagnosis control unit 101 exemplarily illustrated in FIG. 1B. See also [0107]. The Examiner interprets the medical assistant as examining the patient’s lungs.) Re. Claim 28, Sagalovich teaches a system comprising: a hand-held device (100) in communication with a user device (2505, 2511) (see Fig. 25) and including: a wireless transceiver (123) (Fig. 25, [0140]-[0141] teach an apparatus 100, in an embodiment, is configured to wirelessly communicate… via a connector interface 123.); an endoscopic camera (145) (see Fig. 30 and [0141]); one or more processors (101, 2004, 2204); and memory operatively connected to the one or more processors of the hand-held device and including instructions (2501-2503, 2207), that when executed, cause the hand-held device to (Fig. 
3, [0214] teach the diagnosis unit 101 comprising the microcontroller 2004 is configured to execute computer program instructions defined by modules 2501-2503. See also [0157], [0200], [0202].): obtain, in real time and via the endoscopic camera, video data of a user's throat, mouth, nasal cavity, or ear ([0082] teaches, in an embodiment, the power control trigger element 103 and the action control trigger element 104 can be used, for example, for activating or deactivating the image capture device 106, for activating or deactivating a medical diagnostic device, e.g., an endoscope device, operably connected to the diagnosis control unit 101 for controlling an image capture and storage operation of the image capture device 106, for recording, transferring, or deleting diagnostic image data… via the image capture device 106 and/or the medical diagnostic device. [0090] teaches recording a video or capturing an image of the patient’s ear canal and tympanic membrane… and storing the recorded video or the captured image in the local memory of the image capture device 106 prior to transmission.), transmit, in real time and via the wireless transceiver, the obtained video data to the user device for remote real time viewing of the obtained video data by the user for positioning of the endoscopic camera by the user relative to the user's throat, mouth, nasal cavity, or ear (see previous citations. Fig. 25, [0090], [0097] teach transmission to a medical diagnostic examination system 2506, e.g., of a remote user device 2511 or a local user device 2505. Fig. 
25, [0097], [0143] teach the illuminated and indicated anatomical examination areas communicated to system 2506 are viewed locally on a local user device 2505 or remotely on a remote user device 2511 via the communication network for the remote diagnostic examinations.), after the positioning of the endoscopic camera by the user, obtain, in real time and via the endoscopic camera, an image of the user's throat, mouth, nasal cavity, or ear (The Examiner interprets the user as positioning the endoscope device based on the transmission of the anatomical area. [0082], [0090] teach elements 103-104 can be used for activating the endoscope device operably coupled to the diagnostic control unit 101 for controlling the capture of an image created from the real time video output of the patient’s ear.), and transmit, via the wireless transceiver, the obtained image to the user device for remote real time viewing of the obtained image by the user (Fig. 25, [0097], [0143] teach the connector interface 123 receives and transmits the processed diagnostic image data of the patient’s ear from the camera module 124 to the medical diagnostic examination system 2506 of the local user device 2505 or the remote user device 2511.); and the user device (2505) (Fig. 25, [0081] teach a medical diagnostic examination system 2506 that is configured as an application software on a local user device.), the user device including: a display (2606) (Fig. 25, [0147], [0229] teach a display unit 2606, via a graphical diagnostic examination interface (GDEI) 2507, displays information, display interfaces, user interface elements, etc.); one or more processors (2601) (see Fig. 26, [0219], [0226]); and memory (2602) operatively connected to the one or more processors of the user device and including instructions, that when executed, cause the user device to (Fig. 
26, [0219] teach the medical diagnostic examination system 2506 comprises a non-transitory computer readable storage medium storing computer program instructions; and the at least one processor is configured to execute the instructions.): control the display to visually display the video data transmitted from the hand-held device for real time viewing by the user (Fig. 25, [0159] teach the microcontroller 2004 (of the hand-held device) sends 1819 a command to the medical diagnostic examination system 2506 (of local user device 2505) to transfer the diagnostic image data to a data management server 2510 for storage, to another local user device via the connector interface 123, or to a remote user device 2511 over a communication network 2509 exemplarily illustrated in FIG. 25, for facilitating medical imaging and remote diagnostic examination. Fig. 30, [0147], [0254] teach a local nurse and a remote doctor view the anatomical examination areas on separate screens via screen sharing (displaying the video data).) Re. Claim 31, Sagalovich teaches the system of claim 28, wherein: the hand-held device further includes a stethoscope (134) (see Fig. 
29A and [0141]); and the instructions of the memory of the hand-held device further cause the hand-held device to: obtain, via the stethoscope, auscultation data of the user ([0107] teaches using the stethoscope device 134 with the apparatus 100; and the stethoscope device 134 communicates the diagnostic acoustic data via the connector interface 123 to the medical examination system 2506 on the local user device 2505 (transmit the auscultation data obtained).); and transmit concurrently with the image, via the wireless transceiver, the obtained auscultation data to the user device ([0082] teaches elements 103-104 can be used for activating the stethoscope device or the endoscope device… for controlling an image capture and storage operation of the image capture device 106, for recording or transferring diagnostic image data and diagnostic examination data via the image capture device 106 and/or the medical diagnostic device. [0110] teaches the diagnostic examination data includes diagnostic acoustic data recorded by the stethoscope device 134. The Examiner notes that the recited transmitting “concurrently” encompasses Sagalovich’s disclosed transferring both image data and diagnostic acoustic data.) Re. Claim 35, Sagalovich teaches the system of claim 28, wherein the instructions of the memory of the user device further cause the user device to: receive the image from the hand-held device (Fig. 
25, [0097], [0143] teach the connector interface 123 receives and transmits the processed diagnostic image data (i.e., the captured image) of the patient’s ear from the camera module 124 to the medical diagnostic examination system 2506 of the local user device 2505 or the remote user device 2511 (necessarily received)); transmit the received image to a remote user device for generation of a clinical assessment of the user by a physician based on the transmitted image ([0147] teaches the medical assistant shares the captured screenshot(s) with a doctor at the remote location in real time via an audio/video conference set up between the local user device 2505 and the remote user device 2511.); and receive the generated clinical assessment from the remote user device ([0146] teaches the medical diagnostic examination system 2506 on the local user device 2505 is in communication with a remote user device 2511 over the communication network 2509 to facilitate remote viewing, remote selection, and remote diagnostic examination. The Examiner interprets the doctor as remotely delivering a clinical assessment based on the remote viewing and/or the shared information.) Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 22 and 34 are rejected under 35 U.S.C. § 103 as being unpatentable over Sagalovich in view of Rajasekhar et al. (US 2020/0135334 A1; “Rajasekhar” herein).

Re. Claim 22, Sagalovich teaches the hand-held device of claim 21, wherein the instructions further cause the hand-held device to: […] the obtained image and the obtained auscultation data […] (see claim 21 prior art rejection). Sagalovich does not explicitly teach input the obtained image and the obtained auscultation data into an artificial intelligence model to generate a clinical assessment of the user based on the inputted image and the inputted auscultation data; and determine whether to suggest medical attention to the user based on the generated clinical assessment.
Rajasekhar teaches input… and… into an artificial intelligence model to generate a clinical assessment of the user based on the inputted… and the inputted…; and determine whether to suggest medical attention to the user based on the generated clinical assessment ([0044], [0213] teach patient data may be obtained remotely… and analyzed to assess patient condition, e.g., predict risk or likelihood of an upcoming clinical event… Patient assessments (necessarily generated), corresponding to a patient’s likelihood of requiring medical attention, may trigger one or more alerts or action items (determining). [0088], [0130] teach the patient data may be analyzed using a suitable trained machine learning model, and an output may be created as an assessment of patient risk. See also Abstract and [0169], [0179], [0214].) Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the multipurpose diagnostic examination apparatus and system of Sagalovich to utilize data input for machine learning analysis and reporting and to use this information as part of devices and methods for remotely managing chronic medical conditions as taught by Rajasekhar, with the motivation of improving patient care (monitoring, management, costs, etc.) (see Rajasekhar at Abstract and para. 0002-0007). Re. Claim 34, Sagalovich teaches the system of claim 28, wherein the instructions of the memory of the user device further cause the user device to: receive the image from the hand-held device (Fig. 25, [0097], [0143] teach the connector interface 123 receives and transmits the processed diagnostic image data (i.e., the captured image) of the patient’s ear from the camera module 124 to the medical diagnostic examination system 2506 of the local user device 2505 or the remote user device 2511 (necessarily received).); […] the received image […] based on the received image; and […] (see claim 28 prior art rejection). 
Sagalovich does not explicitly teach input the received image into an artificial intelligence model to generate a clinical assessment of the user based on the received image; and determine whether to suggest medical attention to the user based on the generated clinical assessment. Rajasekhar teaches input… into an artificial intelligence model to generate a clinical assessment of the user based on…; and determine whether to suggest medical attention to the user based on the generated clinical assessment. ([0044], [0213] teach patient data may be obtained remotely… and analyzed to assess patient condition, e.g., predict risk or likelihood of an upcoming clinical event… Patient assessments (necessarily generated), corresponding to a patient’s likelihood of requiring medical attention, may trigger one or more alerts or action items (determining). [0088], [0130] teach the patient data may be analyzed using a suitable trained machine learning model, and an output may be created as an assessment of patient risk. See also Abstract and [0169], [0179], [0214].) Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the multipurpose diagnostic examination apparatus and system of Sagalovich to utilize data input for machine learning analysis and reporting and to use this information as part of devices and methods for remotely managing chronic medical conditions as taught by Rajasekhar, with the motivation of improving patient care (monitoring, management, costs, etc.) (see Rajasekhar at Abstract and para. 0002-0007). Claim 27 is rejected under 35 U.S.C. § 103 as being unpatentable over Sagalovich in view of Castro et al. (2012) (“MARVEL: A wireless Miniature Anchored Robotic Videoscope for Expedited Laparoscopy”). Re. Claim 27, Sagalovich teaches the hand-held device of claim 21, wherein the endoscopic camera […]. 
Sagalovich may not teach the endoscopic camera images at a focal distance in a range of 20 mm to 100 mm.

Castro teaches the endoscopic camera images at a focal distance in a range of 20 mm to 100 mm (pg. 2928, section C. “Vision Subsystem” teaches the vision subsystem includes the lens, the lens holder, and the video image sensor… A Sunex DSL944C lens with an image format of 1/2.5" and focal length of 7.5mm is used to obtain focal distances between 30mm and 80mm.)

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the noted feature of a lens with the given focal length achieving the given focal distances with the teachings of Sagalovich, since the combination of the two references is merely combining prior art elements according to known methods to yield predictable results (KSR rationale A). It can be seen that each element claimed is present in either Sagalovich or Castro. Providing lens technology (as taught by Castro) does not change or affect the normal imaging-related functionality of the multipurpose diagnostic examination apparatus and system of Sagalovich. Acquiring patient examination-related image data for a networked computer environment would be performed the same way even with the addition of the lens technology. Since the functionalities of the elements in Sagalovich and Castro do not interfere with each other, the results of the combination would be predictable.

Alternately, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the noted feature of a lens with the given focal length with the teachings of Sagalovich, since known work in one field of endeavor may prompt variations in design in either the same field or a different field based on design incentives or other market forces if the variations would have been predictable to one of ordinary skill in the art (KSR rationale F).
One of ordinary skill in the art of laparoscopy or general optics would have found it obvious to update the endoscope device of the primary reference Sagalovich using a modern lens component as found in the secondary reference Castro to gain commonly understood benefits of such an adaptation (e.g. focal distances between 30 mm and 80 mm). This update would be accomplished with no unpredictable results. Claims 36-37 and 40 are rejected under 35 U.S.C. § 103 as being unpatentable over Sagalovich in view of Wolf et al. (US 2020/0273581 A1; “Wolf” herein). Re. Claim 36, Sagalovich teaches a method of obtaining a clinical assessment of a user by a user device (2505, 2511) in communication with a hand-held device (100), the method comprising: obtaining, in real time and via an endoscopic camera of the hand-held device, video data of the user's throat, mouth, nasal cavity, or ear ([0082] teaches, in an embodiment, the power control trigger element 103 and the action control trigger element 104 can be used, for example, for activating or deactivating the image capture device 106, for activating or deactivating a medical diagnostic device, e.g., an endoscope device (see Fig. 30 and [0141]), operably connected to the diagnosis control unit 101 for controlling an image capture and storage operation of the image capture device 106, for recording, transferring, or deleting diagnostic image data… via the image capture device 106 and/or the medical diagnostic device. 
[0090] teaches recording a video or capturing an image of the patient’s ear canal and tympanic membrane… and storing the recorded video or the captured image in the local memory of the image capture device 106 prior to transmission.); transmitting, in real time and via a wireless transceiver of the hand-held device, the video data to the user device for remote real time viewing of the transmitted video data by the user for positioning of the endoscopic camera by the user relative to the user's throat, mouth, nasal cavity, or ear (see previous citations. Fig. 25, [0090], [0097] teach transmission to a medical diagnostic examination system 2506, e.g., of a remote user device 2511 or a local user device 2505. Fig. 25, [0097], [0143] teach the illuminated and indicated anatomical examination areas communicated to system 2506 are viewed locally on a local user device 2505 or remotely on a remote user device 2511 via the communication network for the remote diagnostic examinations.); after the positioning of the endoscopic camera by the user, obtaining, in real time and via the endoscopic camera, an image of the user's throat, mouth, nasal cavity, or ear (The Examiner interprets the user as positioning the endoscope device based on the transmission of the anatomical area. [0082], [0090] teach elements 103-104 can be used for activating the endoscope device operably coupled to the diagnostic control unit 101 for controlling the capture of an image created from the real time video output of the patient’s ear.); transmitting, via the wireless transceiver, the obtained image to the user device for remote real time viewing of the obtained image by the user (Fig. 
25, [0097], [0143] teach the connector interface 123 receives and transmits the processed diagnostic image data of the patient’s ear from the camera module 124 to the medical diagnostic examination system 2506 of the local user device 2505 or the remote user device 2511.); receiving, by the user device, the transmitted image as a first set of diagnostic information (Fig. 25, [0097], [0143] teach the connector interface 123 receives and transmits the processed diagnostic image data (i.e., the captured image) of the patient’s ear from the camera module 124 to the medical diagnostic examination system 2506 of the local user device 2505 or the remote user device 2511.); obtaining, by the user device, a second set of diagnostic information comprising one or more of the user's symptoms, medical history, or demographics ([0229] teaches using the input devices 2607 to provide inputs to the medical diagnostic examination system 2506 (by the user device), for example, a patient’s personal information, a time of medically examining the patient, a name for an audio file. Fig. 25, [0241] teach the created audio file can be used for creating a medical history record for the patient; and system 2506 stores the created audio file in the data management server 2510.); […] (see previous citations). Sagalovich does not teach inputting, by the user device, the first set of diagnostic information and the second set of diagnostic information into an artificial intelligence model for generating the clinical assessment based on the first set of diagnostic information and the second set of diagnostic information; and obtaining, by the user device, the clinical assessment from the artificial intelligence model. 
Wolf teaches inputting… into an artificial intelligence model for generating the clinical assessment… and obtaining… the clinical assessment from the artificial intelligence model ([0288] teaches determining a first surgical complexity level (a clinical assessment) may be based on an analysis of an electronic medical record including information regarding a medical history of the patient… details about medical assessment, medical images, and so forth. [0296] teaches a process 1300 may include analyzing the first set of frames using the first historical data and using the identified anatomical structure to determine the first surgical complexity level associated with the first set of frames… For example, a machine learning model may be trained using training data (for example, training data based on the historical data based on an analysis of frame data captured from prior surgical procedures) (the datasets necessarily input) to identify surgical complexity level associated with a set of frames (obtaining), and the trained machine learning model may be used to analyze the first set of frames to determine a first surgical complexity level associated with the first set of frames (also obtaining). [0017], [0258] teach generating surgical videos including sets of frames tagged with their respective complexity level.) Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the multipurpose diagnostic examination apparatus and system of Sagalovich to utilize data input for machine learning analysis and reporting and to use this information as part of systems and methods for predicting post-discharge risk as taught by Wolf, with the motivations of improving surgical preparation with risk reduction and improving decision-making support provided in the form of tagged video clips (see Wolf at Abstract and para. 0003-0004, 0123, 0126, 0258).

Re. Claim 37, Sagalovich/Wolf teaches the method of claim 36, further comprising: obtaining, via a stethoscope of the hand-held device (134) (see Sagalovich Fig. 29A and [0141]), auscultation data of the user (Sagalovich [0107] teaches using the stethoscope device 134 with the apparatus 100; and the stethoscope device 134 communicates the diagnostic acoustic data via the connector interface 123 to the medical examination system 2506 on the local user device 2505 (transmit the auscultation data obtained).); and transmitting concurrently with the image, via the wireless transceiver, the obtained auscultation data to the user device; and wherein the first set of diagnostic information includes the transmitted auscultation data (Sagalovich [0082] teaches elements 103-104 can be used for activating the stethoscope device or the endoscope device… for controlling an image capture and storage operation of the image capture device 106, for recording or transferring diagnostic image data and diagnostic examination data (the first set) via the image capture device 106 and/or the medical diagnostic device. Sagalovich [0110] teaches the diagnostic examination data includes diagnostic acoustic data recorded by the stethoscope device 134. The Examiner notes that the recited transmitting “concurrently” encompasses Sagalovich’s disclosed transferring both image data and diagnostic acoustic data.)

Re. Claim 40, Sagalovich/Wolf teaches the method of claim 36, further comprising transmitting, by the user device, the first set of diagnostic information and the second set of diagnostic information to a remote user device for remote clinical assessment of the user by a physician based on the transmitted first set of diagnostic information and the transmitted second set of diagnostic information (Sagalovich Fig.
25, [0159], [0215] teach the microcontroller 2004 sends 1819 a command to the medical diagnostic examination system 2506 (of the user device) to transfer the diagnostic image data (the first set) to a remote user device 2511 for facilitating the medical imaging and the remote diagnostic examination. Sagalovich Fig. 25, [0241] teach the created audio file can be used for creating a medical history record for the patient (the second set); and storing the created audio file in the data management server 2510. Sagalovich Fig. 25, [0146], [0212], [0254] teach the remote user device 2511 can also receive and view the diagnostic examination data / the diagnostic reports of the patient.)

Claim 39 is rejected under 35 U.S.C. § 103 as being unpatentable over Sagalovich in view of Wolf and Rajasekhar.

Re. Claim 39, Sagalovich/Wolf teaches the method of claim 36, further comprising […] by the user device […] (see claim 36 prior art rejection). Sagalovich/Wolf may not teach determining, by the user device, whether to suggest medical attention to the user based on the obtained clinical assessment. Rajasekhar teaches determining… whether to suggest medical attention to the user based on the obtained clinical assessment ([0044], [0213] teach patient data may be obtained remotely… and analyzed to assess patient condition, e.g., predict risk or likelihood of an upcoming clinical event… Patient assessments (necessarily obtained), corresponding to a patient’s likelihood of requiring medical attention, may trigger one or more alerts or action items (determining). Note that [0088], [0130] teach the patient data may be analyzed using a suitable trained machine learning model, and an output may be created as an assessment of patient risk. See also Abstract and [0169], [0179], [0214].)
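Rajasekhar's determining step amounts to thresholding an assessed risk to decide whether to raise an alert. A minimal sketch, assuming a risk score in [0, 1]; the 0.7 threshold and the function name are illustrative assumptions, not values taken from the reference.

```python
def suggest_medical_attention(risk_score: float, threshold: float = 0.7) -> bool:
    """Determine whether to suggest medical attention to the user based on
    the obtained clinical assessment, modeled here as a numeric risk score.
    The 0.7 cutoff is a hypothetical example value."""
    return risk_score >= threshold
```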
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the multipurpose diagnostic examination apparatus and system of Sagalovich/Wolf to utilize data input for machine learning analysis and reporting and to use this information as part of devices and methods for remotely managing chronic medical conditions as taught by Rajasekhar, with the motivation of improving patient care (monitoring, management, costs, etc.) (see Rajasekhar at Abstract and para. 0002-0007).

Subject Matter Free of Prior Art

The cited prior art of record fails to expressly teach or suggest, either alone or in combination, the features found within dependent claim(s) 23, 32 and 38 as follows (claim 23 being representative): The hand-held device of claim 21, further comprising a temperature sensor for no-contact body temperature measurements of the user. The most remarkable prior art of record is as follows: Sagalovich for teachings above-cited (see at least Fig. 1A). Wolf for teachings above-cited; additionally for teaching temperature sensors may include infrared cameras (e.g., an infrared camera 117) for thermal imaging. Infrared camera 117 may allow measurements of the surface temperature of an anatomic structure at different points of the structure (see Fig. 1 and para. 0097). The Examiner notes that this is a no-contact body temperature sensor.

The cited prior art of record fails to expressly teach or suggest, either alone or in combination, the features found within dependent claim(s) 24 and 33 as follows (claim 24 being representative): The hand-held device of claim 21, further comprising a pulse oximeter for measuring an oxygen saturation of the user and a heart rate of the user. The most remarkable prior art of record is as follows: Sagalovich for teachings above-cited (see at least Fig. 1A). Boucher et al.
(US 2018/0353073 A1) for teaching diagnostic device 18 and diagnostic processing components communicating with third party diagnostic devices, e.g., pulse oximeters (see at least Fig. 1 and para. 0395).

The cited prior art of record fails to expressly teach or suggest, either alone or in combination, the features found within dependent claim(s) 29 (and claim 30 that depends on claim 29) as follows: obtain input data indicating whether to provide the first set of diagnostic information and the second set of diagnostic information to generate a clinical assessment of the user by a second user of a remote user device or to generate the clinical assessment by an artificial intelligence model. The most remarkable prior art of record is as follows: Sagalovich for teachings above-cited (see at least Fig. 1A and para. 0146-0147). Rajasekhar for teachings above-cited (see at least para. 0044, 0088, 0130, 0213-0214).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jessica M Webb whose telephone number is (469)295-9173. The examiner can normally be reached Mon-Fri 9:00am-1:00pm CST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Morgan, can be reached on (571) 272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.M.W./
Examiner, Art Unit 3683

/CHRISTOPHER L GILLIGAN/
Primary Examiner, Art Unit 3683

Prosecution Timeline

Nov 18, 2024
Application Filed
Feb 18, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585721
SINGLE BARCODE SCAN CAST SYSTEM FOR PHARMACEUTICAL PRODUCTS
2y 5m to grant · Granted Mar 24, 2026
Patent 12525336
INTELLIGENT MEDICAL ASSESSMENT AND COMMUNICATION SYSTEM WITH ARTIFICIAL INTELLIGENCE
2y 5m to grant · Granted Jan 13, 2026
Patent 12394505
ELECTRONIC HEALTH RECORD INTEROPERABILITY TOOL
2y 5m to grant · Granted Aug 19, 2025
Patent 12347541
CAREGIVER SYSTEM AND METHOD FOR INTERFACING WITH AND CONTROLLING A MEDICATION DISPENSING DEVICE
2y 5m to grant · Granted Jul 01, 2025
Patent 12293001
REFERENTIAL DATA GROUPING AND TOKENIZATION FOR LONGITUDINAL USE OF DE-IDENTIFIED DATA
2y 5m to grant · Granted May 06, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
33%
Grant Probability
86%
With Interview (+52.5%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 99 resolved cases by this examiner. Grant probability derived from career allow rate.
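The displayed projections are mutually consistent if the interview lift is read as additive percentage points on top of the career allow rate; a quick arithmetic check (this additive reading is an assumption about the tool's methodology, which the page does not state):

```python
granted, resolved = 33, 99                 # career figures shown above
allow_rate = 100 * granted / resolved      # career allow rate, in percent
interview_lift = 52.5                      # interview lift, percentage points

print(round(allow_rate))                   # 33  (Grant Probability)
print(round(allow_rate + interview_lift))  # 86  (With Interview)
```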
