Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA .
Status of Claims
Elected Claims 1, 3, 6, 8-9, 19, 23-24, 26-27, 33, 35, 37-38, 40, 43-45, 48, 50 have been reviewed and addressed below. Claims 2, 4-5, 7, 10-18, 20-22, 25, 28-32, 34, 36, 39, 41-42, 46-47, 49 have been cancelled.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 24, 26-27, 33, 35, 37-38, 40, 43-45, 48, 50 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ebrahimi (2022/0187841).
With respect to claim 24 Ebrahimi teaches a digital health robot, comprising:
a. a housing configured for mobility, including a navigation system operable to move the robot and avoid obstacles using integrated sensors (Ebrahimi paragraph 1140 “the robot comprises a LIDAR. In some embodiments, the LIDAR is encased in a housing. In some embodiments, the LIDAR housing includes a bumper to protect the LIDAR from damage. In some embodiments, the bumper operates in a similar manner as the bumper of the robot”);
b. a plurality of medical device systems integrated within the housing, each configured to collect patient diagnostic data during live medical examinations (Ebrahimi paragraph 1357 “medical robots may include blood pressure sensing device, heart rate monitor, heart pulse analyzer, blood oxygen sensor, retina scanner, breath analyzer, swab analysis”); and
c. a communication system configured to enable encrypted video conferencing between a healthcare provider and a patient and to transmit diagnostic data in real time to a remote healthcare provider dashboard (Ebrahimi paragraph 1144 “the processor of the robot may transmit image or video data captured by the camera of the robot for video conferencing while also displaying video conference participants on the touch screen display. The processor may use depth information collected by the same camera to maintain the position of the user in the middle of the frame of the camera seen by video conferencing participants”).
With respect to claim 26 Ebrahimi teaches the robot of claim 24, wherein the robot is equipped with obstacle detection sensors (Ebrahimi paragraph 1143).
With respect to claim 27 Ebrahimi teaches the robot of claim 24, wherein the integrated medical devices include one or more of pan-tilt-zoom (PTZ) exam cameras, digital stethoscopes, pulse oximeters, blood pressure monitors, digital dermoscopes, thermometers, weight scales, glucometers, spirometers, otoscopes, ultrasound devices, and digital ECG systems (Ebrahimi paragraph 792 “the robot may include, but is not limited to include, one or more of a casing, a chassis including a set of wheels, a motor to drive the wheels, a receiver that acquires signals transmitted from, for example, a transmitting beacon, a transmitter for transmitting signals, a processor, a memory storing instructions that when executed by the processor effectuates robotic operations, a controller, a plurality of sensors (e.g., tactile sensor, obstacle sensor, temperature sensor, imaging sensor, light detection and ranging (LIDAR) sensor, camera, depth sensor, time-of-flight (TOF) sensor, TSSP sensor, optical tracking sensor, sonar sensor, ultrasound sensor, laser sensor, light emitting diode (LED) sensor, etc.), network or wireless communications, radio frequency (RF) communications”).
With respect to claim 33 Ebrahimi teaches the robot of claim 24, wherein the AI module provides predictive maintenance alerts for the robot's medical devices and operational components (Ebrahimi paragraphs 1356 and 1308).
With respect to claim 35 Ebrahimi teaches a method for performing remote healthcare using a remote healthcare platform comprising a digital health robot and a communications platform, the method comprising:
a. initiating an encrypted video conferencing session between a healthcare provider and a patient using the digital health robot (Ebrahimi paragraph 1144 “the processor of the robot may transmit image or video data captured by the camera of the robot for video conferencing while also displaying video conference participants on the touch screen display” and 1350 real time video feed);
b. remotely controlling the digital health robot to perform live medical examinations on the patient using integrated medical device systems (Ebrahimi paragraph 1344 “the medical care robot is analyzing the swab sample, again a progress bar 47002 is displayed to the user and an estimated time remaining”); and
c. transmitting real-time diagnostic data from the digital health robot to a healthcare provider dashboard for review and assessment by the healthcare provider (Ebrahimi paragraph 1341 “the medical care robot may include media capabilities for telecommunication with hospital staff, such as nurses and doctors, or other persons (e.g., technical support staff)” and paragraph 1339).
With respect to claim 37 Ebrahimi teaches the method of claim 35, wherein the medical examination includes utilizing one or more integrated medical devices including pan-tilt-zoom (PTZ) exam cameras, digital stethoscopes, pulse oximeters, blood pressure monitors, digital dermoscopes, thermometers, weight scales, glucometers, spirometers, otoscopes, ultrasound devices, and digital ECG systems (Ebrahimi paragraph 1357 “medical robots may include blood pressure sensing device, heart rate monitor, heart pulse analyzer, blood oxygen sensor, retina scanner, breath analyzer, swab analysis”).
With respect to claim 38 Ebrahimi teaches the method of claim 35 wherein the transmitted diagnostic data is analyzed by an AI module to identify potential anomalies (Ebrahimi paragraph 1300).
With respect to claim 40 Ebrahimi teaches the method of claim 35 wherein the digital health robot is remotely controlled by the doctor during the examination (Ebrahimi paragraph 1341).
With respect to claim 43 Ebrahimi teaches the method of claim 35 wherein the AI module provides interactive guidance to the patient during the examination using natural language processing (Ebrahimi paragraph 1003).
With respect to claim 44 Ebrahimi teaches the method of claim 35 further comprising generating a structured consultation summary, including Subjective, Objective, Assessment, and Plan (SOAP) notes, using an AI module based on the diagnostic data and doctor-patient interaction (Ebrahimi paragraph 1118).
With respect to claim 45 Ebrahimi teaches the method of claim 35, wherein the AI module dynamically suggests additional diagnostic tests based on real-time analysis of collected patient data (Ebrahimi paragraph 1341).
With respect to claim 48 Ebrahimi teaches the method of claim 35, wherein the AI module generates visualizations of diagnostic data, including graphs and charts, for the doctor's dashboard (Ebrahimi paragraph 731).
With respect to claim 50 Ebrahimi teaches the method of claim 35, wherein the AI module automatically categorizes patient data into structured fields for the doctor's review and electronic medical record integration (Ebrahimi paragraph 1341).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 6, 8-9, 19, 23 are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi (2022/0187841) in view of Vu (2007/0192910).
With respect to claim 1 Ebrahimi teaches a digital health communication and diagnostic system, comprising:
a. at least one digital health robot (Ebrahimi paragraph 06) including
i. a mobile communication device configured to enable encrypted video conferencing between a healthcare provider and a patient (Ebrahimi paragraph 1341 “the medical care robot may include media capabilities for telecommunication with hospital staff, such as nurses and doctors, or other persons (e.g., technical support staff)”, 1415 “telepresence robot, an application may serve as an audio/video call transmitter”),
ii. a plurality of integrated medical device systems operable to perform live medical examinations (Ebrahimi paragraph 1341), and
iii. a navigation system configured for remote control by the healthcare provider (Ebrahimi paragraph 1343 “an example of a medical care robot including a casing 46700, a sensor window 46701 behind which sensors for mapping and navigation”);
b. a medical communications platform facilitating remote operation of said at least one digital health robot (Ebrahimi paragraph 803 “[t]he MCU reads data from sensors such as obstacle sensors or IR transmitters and receivers on the robot or a dock or a remote device, reads data from an odometer and/or encoder, reads data from a gyroscope and/or IMU, reads input data provided to a user interface, selects a mode of operation, automatically turns various components on and off or per user request, receives signals from remote or wireless devices and send output signals to remote or wireless devices using Wi-Fi, radio, etc., self-diagnoses the robot system, operates the PID controller, controls pulses to motors, controls voltage to motors, controls the robot battery and charging, controls the fan motor, sweep motor, etc., controls robot speed, and executes the coverage algorithm”) by said healthcare provider operable to
i. provide a health care provider dashboard displaying patient medical information and real-time data from the medical device systems (Ebrahimi paragraph 1342 “the medical care robot may include an interface (e.g., a touch screen) that may be used to input information, such as patient information, requested items, items provided to the medical care robot and following instructions for the items provided to the medical robot, etc. In some embodiments, the medical care robot may include media capabilities for telecommunication with hospital staff, such as nurses and doctors, or other persons (e.g., technical support staff). In some embodiments, the medical care robot may be remotely controlled using an application of a communication device. In some embodiments, patients may request medical care services or an appointment using an application of a communication device”), and
ii. facilitate secure data exchange between the healthcare provider and the patient (Ebrahimi paragraph 1342, quoted above); and
c. a host computing system administrating said network and interconnected with software applications executed on authorized computing devices (Ebrahimi paragraph 779 “the processor executes a decision-making loop comprising of observation and actuation in response to the observation”).
Ebrahimi does not teach wherein said at least one digital health robot is configurable to operate using distinct profiles, each profile having role-specific dashboards and administrative access levels.
Vu teaches that a mobile robot may also use a secondary system for confirming the identity of a person, such as by analyzing an infrared image or signature heat pattern corresponding to the person being sought, performing acoustic analysis to recognize the voice of the person, detecting an RFID or magnetic tag associated with the person or carried by the person, detecting a particular motion or a gesture, and/or performing image analysis on a video stream or still image frame generated by a camera to recognize facial features or memorized clothing sets typically worn by a person (Vu paragraph 246).
One of ordinary skill in the art at the time of filing would have found it obvious to combine the teachings of Ebrahimi with Vu with the motivation of assisting persons with various tasks (Vu paragraph 2).
With respect to claim 3 Ebrahimi in view of Vu teaches the system of claim 1, wherein each digital health robot includes medical examination devices operable to transfer diagnostic data to a digital dashboard for remote healthcare delivery by doctors (Ebrahimi paragraph 771).
With respect to claim 6 Ebrahimi in view of Vu teaches the system of claim 1, wherein the communications platform supports multi-specialty consultations by connecting multiple healthcare providers through separate digital communications devices to the same digital health robot session (Ebrahimi paragraph 1341).
With respect to claim 8 Ebrahimi in view of Vu teaches the system of claim 1, further comprising a patient dashboard configured to display visit summaries and follow-up tasks through said at least one digital health robot (Ebrahimi paragraph 1244).
With respect to claim 9 Ebrahimi in view of Vu teaches the system of claim 1, wherein the digital health robot includes voice-activated controls for patient interaction (Ebrahimi paragraph 1047).
With respect to claim 19 Ebrahimi in view of Vu teaches the platform of claim 1, wherein the digital health robot includes an AI-powered module to process patient intake information through natural language processing (Ebrahimi paragraph 1003).
With respect to claim 23 Ebrahimi in view of Vu teaches the platform of claim 1, wherein the AI module generates structured Subjective, Objective, Assessment, and Plan (SOAP) notes summarizing the consultation for the doctor's review (Ebrahimi paragraph 1118).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REGINALD R REYES whose telephone number is (571)270-5212. The examiner can normally be reached Monday through Friday, 8:00 AM to 4:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid R. Merchant, can be reached at (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
REGINALD R. REYES
Primary Examiner
Art Unit 3684
/REGINALD R REYES/Primary Examiner, Art Unit 3684