Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-7 and 9-21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, 9, 11-13 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Grantcharov et al. (US 2022/0270750) in view of Robaina et al. (US 2018/0197624).
Regarding claim 1, Grantcharov teaches a medical system (Figs. 1-14) comprising:
memory configured to store video data and audio data, the video data and audio data being captured during a medical procedure of a patient (at least Fig. 14 and paragraphs 122 and 212 teach recording/storing video and audio data to create a session recording file for the medical procedure/event performed); and
processing circuitry communicatively coupled to the memory (Figs. 1, 11, paragraphs 202-206 teach a system including an encoder to perform the following steps), the processing circuitry being configured to:
receive the video data from one or more first sensors at a first location (paragraph 122 teaches receiving video feeds, which are encoded with the audio to a common format in a file (see paragraphs 216 and 244));
receive the audio data from one or more second sensors at the first location (Fig. 14 and paragraph 245 teach audio capture via microphones (paragraph 192)); and
register the video data with the audio data (at least Fig. 12, step 108 teaches combining the various feeds (the video, audio, and other feeds of Fig. 14), and paragraphs 122 and 212 teach encoding the video and audio into a common format in a file (see paragraphs 216 and 244)).
However, while Grantcharov teaches a medical system, Grantcharov does not explicitly teach the following limitations; Robaina teaches an equivalent system meeting the claimed:
receive a wake-up word from one or more second sensors at the first location (paragraphs 222, 225, 229, 230, 239, 247 and 250 teaches the claimed);
in response to receiving the wake-up word, begin recording the audio data to the memory (paragraphs 222, 225, 229, 230, 239, 247 and 250 teaches the claimed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Robaina into Grantcharov such that the microphones of the device of Grantcharov “at a/the first location” are able to receive the wake-up word and control the recording of the medical procedure, because said incorporation provides the benefit of improving the efficiency of medical procedures by making some instructions hands-free (paragraph 238).
Regarding claim 2, Grantcharov teaches the claimed wherein as part of registering the video data with the audio data, the processing circuitry is further configured to:
apply one or more video timestamps to the video data; and apply one or more audio timestamps to the audio data (paragraphs 54, 93, 106, wherein timestamps are embedded with the real-time stream usable for playback purposes).
Regarding claim 3, Grantcharov teaches the claimed wherein the processing circuitry is further configured to present a user interface on a display, wherein the user interface comprises a plurality of drop-down menus, each of the plurality of drop-down menus having a limited number of selection options, the selection options being configured to categorize at least one of the medical procedure or a patient condition (paragraph 45 teaches an intelligent dashboard that allows for procedure tagging. Additionally, paragraphs 56, 108, 136 and 217-219 teach a user interface that allows for user interaction and input regarding the procedure or monitoring of the patient condition).
Regarding claim 4, Grantcharov teaches the claimed wherein the video data is first video data and wherein the processing circuitry is further configured to:
receive second video data from one or more third sensors (paragraph 122 and Fig. 14 teach multiple camera feeds from additional video camera sensors);
register the second video data with the first video data (at least Fig. 12, step 108 teaches combining the various feeds (the video, audio, and other feeds of Fig. 14), and paragraphs 122 and 212 teach encoding the video and audio into a common format in a file (see paragraphs 216 and 244); multiple feeds are encoded together); and
overlay the first video data and the second video data on a display, wherein the one or more third sensors are of a different type than the one or more first sensors (Fig. 14, multiple sources are fed into the recording session; the first versus third sensors are met by the plurality of different camera types, e.g., a room camera versus laparoscopic, endoscopic, x-ray, or MRI cameras).
Regarding claim 6, Grantcharov teaches the claimed wherein the processing circuitry is further configured to:
execute a computer vision model to determine an intraprocedural event; and
based on determining the intraprocedural event, take at least one action (paragraphs 300-303 teach learning wherein patterns are learned to detect errors/adverse events, which is equivalent to taking at least one action, such as generating a report, when an event is detected. Additionally, paragraphs 42 and 274-278 teach a computer vision model executing an action as claimed, such as detecting events using computer vision modeling).
Regarding claim 9, Robaina teaches the claimed wherein the processing circuitry is further configured to:
determine that a predetermined time period from receiving the wake-up word has expired; and based on the determination that the predetermined time period from receiving the wake-up word has expired, stop recording the audio data to the memory (paragraphs 225-226 at least teach that, after the claimed period of time following the wake-up word has passed, the recording stops; the recordings are stored into storage for future playback). The prior motivation as discussed above is incorporated herein.
Regarding claim 11, Grantcharov teaches the claimed wherein the processing circuitry is further configured to share at least one of the video data or the audio data with another person or a group of people (paragraph 277 at least teaches sharing the video data or audio data with computer systems to be ultimately shared with others after the video and audio are processed by the learning system).
Regarding claim 12, Grantcharov teaches the claimed wherein the processing circuitry is further configured to:
receive, via a user interface, performance information relating to the medical procedure; associate the performance information with one or more clinicians that performed the medical procedure; and store the performance information in the memory (paragraph 76, wherein healthcare professionals’ performance is scored and associated with the healthcare professional for certification and/or re-certification purposes).
Regarding claim 13, Grantcharov teaches the claimed wherein the processing circuitry is further configured to:
upload at least one of the video data or the audio data to a server (paragraphs 276-277);
edit the video data to generate an excerpt of the video data (Figs. 24-26 teach a timeline wherein only portions of the video and audio are generated for review); and
store the excerpt of the video data in the memory (Figs. 24-26 teach a timeline wherein only portions of the video and audio are generated for review; the timeline portions/segments are stored in the system as in paragraph 54).
Claims 15 and 16 are rejected for the same reasons as discussed in claim 1 above; furthermore, Grantcharov teaches the non-transitory computer-readable medium (NTCRM) in paragraphs 394 and 397.
Claims 17-19 are rejected for the same reasons as discussed in claims 2-4, respectively.
Claims 5 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Grantcharov et al. (US 2022/0270750) in view of Robaina et al. (US 2018/0197624) and further in view of Wolf et al. (US 2020/0273548).
Regarding claims 5 and 20, Grantcharov teaches the claimed as discussed in claim 1 above but fails to teach the following, which Wolf teaches, wherein the processing circuitry is further configured to:
execute a machine learning model to determine at least one of a patient condition or a type of the medical procedure (at least paragraphs 209, 247, 259 and 410 teach various means by which the system utilizes a machine learning model to determine a patient’s characteristics, including a medical condition exhibited by the patient);
based on the at least one of the patient condition or the type of the medical procedure, automatically generate a report template, the report template relating to the at least one of the patient condition or the type of the medical procedure and configured to be filled out by a clinician (paragraphs 410-416 teach wherein the system can automatically populate a post-operative report and fill in data in accordance with the procedure, surgery details, patient conditions, etc., to be utilized later); and
present a user interface on a display, the user interface comprising the report template (Fig. 23 wherein the report template is displayed on a display).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Wolf into the proposed combination of Grantcharov and Robaina such that the recorded medical procedure may generate an automatic report template, because such an incorporation provides the benefit of reducing the time the healthcare provider/doctor/surgeon must spend creating a post-operative report (as discussed in paragraphs 410-416).
Regarding claim 21, Wolf teaches the claimed wherein the processing circuitry is further configured to:
obtain clinician input filling out the report template (Fig. 23 and paragraphs 410-419 teach wherein the populated report form can be further altered by the healthcare provider or a user);
generate a completed report based on the report template and the clinician input (paragraphs 410-419 and Figs. 23-25 teach that the final report is generated); and
store the completed report in the memory (paragraphs 410-419, especially paragraph 415, teach storage of reports).
The prior motivation as discussed above is incorporated herein.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Grantcharov et al. (US 2022/0270750) in view of Robaina et al. (US 2018/0197624) and further in view of Lang et al. (US 2021/0192759).
Regarding claim 7, Grantcharov and Robaina teach the claimed as discussed in claim 1 above but fail to teach the following, which Lang teaches: wherein the at least one action comprises at least one of: informing, via a user interface, a clinician to save a medical instrument; or (examiner notes the alternative language) generating a shipping label (paragraph 501 teaches a system wherein a video monitoring system is capable of automatically informing the user to generate a shipping label).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Lang into the system of Grantcharov because said incorporation provides the benefit of improving the system by automatically replacing used surgical inventory (paragraph 501).
Claims 10 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Grantcharov et al. (US 2022/0270750) in view of Robaina et al. (US 2018/0197624) and further in view of Colley et al. (US 2021/0090694).
Regarding claim 10, Grantcharov teaches the claimed as discussed in claim 1 above but fails to teach the following, which Colley teaches, wherein the processing circuitry is further configured to:
execute a machine learning model to determine a patient condition (paragraph 1296 teaches MLA to analyze and determine a patient’s condition and disease state); and
present a user interface on a display, the user interface comprising the determined patient condition (at least Figs. 15-17 teach wherein the patient’s condition is presented on a user interface on a display).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Colley into the system of Grantcharov because such an incorporation provides the benefit of improving patient outcomes by listing possible treatments a patient may be suited for (paragraph 1091).
Regarding claim 14, Grantcharov teaches the claimed as discussed in claim 1 above but fails to teach the following, which Colley teaches, wherein the processing circuitry is further configured to:
execute a machine learning model to determine a patient condition (paragraph 1296 teaches MLA to analyze a patient);
based on the patient condition, generate a recommendation for a clinical trial for the patient (paragraph 1296 suggests a clinical trial based on analytics); and
present the recommendation, via a user interface, to a clinician (paragraphs 41, 1091 and 1296 teach suggesting clinical trials, out of many available, based on analytics).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Colley into the system of Grantcharov because such an incorporation provides the benefit of improving patient outcomes by listing possible treatments a patient may be suited for (paragraph 1091).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GELEK W TOPGYAL whose telephone number is (571)272-8891. The examiner can normally be reached M-F (9:30-6 PST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GELEK W TOPGYAL/Primary Examiner, Art Unit 2481