DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on November 14, 2024, is in compliance with the provisions of 37 CFR 1.97 and has been considered by the examiner.
Notice to Applicant Regarding Patent Eligibility under 35 U.S.C. § 101
The inventive concept in the claimed invention is similar to the claims in Patent No. US 11,830,614, issued on November 28, 2023, and Patent No. US 12,191,012, issued on January 7, 2025. The claimed invention was analyzed under § 101 and is deemed to be eligible for similar reasons. Namely, while the present invention may be interpreted as being directed to an abstract idea in the Certain Methods of Organizing Human Activity category, which, under its broadest reasonable interpretation, covers concepts related to managing personal behavior or relationships or interactions between people (i.e., a method of collecting and displaying information by medical professionals during a medical procedure), the additional elements provide an improvement to the functioning of a computer, or to any other technology or technical field.
Similar to the claim limitations in the DDR Holdings, LLC v. Hotels.com, L.P. case (see MPEP § 2106.05(a), (b), (c), (e), (f) – where the claim limitations were directed to modifying a conventional internet hyperlink protocol to dynamically produce a dual-source hybrid webpage) and the McRO, Inc. v. Bandai Namco Games Am. Inc. case (see MPEP § 2106.05(a), (b) – where the claim limitations were directed to a method for automatically animating lip synchronization and facial expression of three-dimensional characters), Applicant’s disclosure provides support for its claims as being concerned with the technical problem of improving efficiency and sterility by the use of augmented reality devices worn by a medical professional, particularly for medical procedures in operating rooms of healthcare facilities. See Applicant’s specification as filed on November 14, 2024, paragraph [0034]. Further, Examiner notes paragraphs [0038], [0042], [0052], [0056], and [0086] in Applicant’s specification as filed on November 14, 2024, as providing additional support for indicating that the augmented reality device provides capabilities including: (1) capturing video and images of the medical procedures (see Applicant’s specification as filed on November 14, 2024, paragraphs [0038] and [0052]); (2) touch-free commands via motion by the user (see Applicant’s specification as filed, paragraphs [0042] and [0086]); and (3) sharing video and images with other medical professionals during the medical procedures for guidance and training purposes (see Applicant’s specification as filed on November 14, 2024, paragraph [0056]). All of these features are directly associated with the augmented reality device described in the claims, which Applicant discloses as helping to improve patient care and increase efficiency. See Applicant’s specification as filed on November 14, 2024, paragraph [0086].
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites a limitation directed to “a network connection of the augmented reality device with a computing device over a network, the computing device configured to generate feedback data for a portion of data using one or more user input devices associated with the computing device”. However, as claim 1 is currently drafted, it is not clear whether the portion of data that the feedback is generated for refers to the data that was previously captured by the camera of the augmented reality device or to some other data (which is not directly mentioned in the claim). Examiner suggests that Applicant amend claim 1 to recite the following: “a network connection of the augmented reality device with a computing device over a network, the computing device configured to generate feedback data for a portion of the data captured by the camera using one or more user input devices associated with the computing device”, or make another appropriate correction. For examination purposes, the portion of data that the feedback is generated for recited in claim 1 will be interpreted as the same as the portion of the data that was captured previously by the camera in claim 1 (i.e., data corresponding to the medical procedure), and the limitation recited in claim 1 will be read the same as “a network connection of the augmented reality device with a computing device over a network, the computing device configured to generate feedback data for a portion of the data captured by the camera using one or more user input devices associated with the computing device”.
Claims 2-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for similar reasons as the § 112(b) rejection applied to claim 1 described above (due to their individual dependencies on claim 1).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 4-8, 10-15, and 17-19 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by:
- Shakil et al. (Pub. No. US 2014/0222526).
Regarding claim 1,
- Shakil et al. (Pub. No. US 2014/0222526) discloses:
- a healthcare delivery system comprising (Shakil, paragraph [0015]; Paragraph [0015] discloses an embodiment of a system 100 for augmenting performance of a provider.):
- an augmented reality device including a camera configured to capture data corresponding to a medical procedure (Shakil, paragraphs [0018], [0019], [0024], and [0025]; Paragraph [0024] discloses that the computing device 600 can be a wearable head-mounted computing device 602, which can be the VUZIX M100 video eyewear device, Google Glass, Looxcie wearable camera device, a virtual reality headset (e.g., Oculus Rift), and/or any other similar head-mounted display device or wearable augmented reality device (i.e., an augmented reality device which includes a camera). Paragraph [0025] discloses that the video camera 620 can be configured to capture images and may be forward facing to capture at least a portion of the real-world view perceived by the user to generate an augmented reality where computer generated images appear to interact with the real-world view perceived by the user (i.e., capturing data). Paragraphs [0018] and [0019] disclose that these interactions may be interactions between a medical provider and a patient, including conversations between the provider and the patient, wherein the patient provides symptoms, progress, concerns, medication information, allergy information, insurance information, and/or any other suitable health-related information to the provider; transactions wherein the patient provides demographic and/or family history information to the provider; interactions wherein the provider facilitates performance or acquisition of lab tests for the patient; interactions wherein the provider generates image data (e.g., from x-rays, MRIs, CT scanning, ultrasound scanning, etc.) from the patient; interactions wherein the provider generates other health metric data (e.g., cardiology-related data, respiratory data) from the patient; and/or any other suitable interaction between the provider and the patient (see Shakil, paragraph [0018]); an interactive session wherein the provider is examining the patient in a clinical setting or in the examining room of an office or other healthcare facility and eliciting information from the patient by questioning the patient; interactions in a hospital emergency room; interactions in an operating suite where the patient is unconscious; and interactions in a patient’s home, research setting, etc. (see Shakil, paragraph [0019]) (i.e., examples showing that the captured data is medical multimedia in connection with a medical procedure).);
- a network connection of the augmented reality device with a computing device over a network (Shakil, paragraph [0021]; Paragraph [0021] discloses that the computing device 600 preferably enables transmission of data generated using the computing device 600 by way of a communication link 410 (e.g., a wired connection, a wireless connection) that can be configured to communicate with a remote device (i.e., a network connection of the augmented reality device with a computing device over a network).), the computing device configured to generate feedback data for a portion of data using one or more user input devices associated with the computing device (Shakil, paragraphs [0031] and [0054]; Paragraph [0031] discloses that the scribe cockpit 120 (i.e., an example of a remote device described in paragraph [0021] of Shakil, where the remote device is interpreted to be the equivalent of Applicant’s computing device) enables a scribe to receive information from interactions between a patient and the provider, which can be used to provide guidance and/or feedback to the provider (i.e., generating feedback data for a portion of the data captured by the camera using one or more user input devices associated with the computing device). Further, paragraph [0054] discloses that the set of tools [provided to the scribe] includes providing (1) options (e.g., by drop-down menus, by auto-completing partially inputted information) and (2) audio and/or video manipulation tools (e.g., rewind, fast forward, pause, accelerated playback, decelerated playback tools) that are controlled by an input module (e.g., mouse, keyboard, touchpad, foot pedals, etc.) (i.e., one or more user input devices associated with the computing device) to facilitate information retrieval for template completion and multimedia capture and incorporation of multimedia into content generated or prepared by the scribe, and enables the scribe to provide real time and/or delayed feedback to the provider regarding aspects of the interactions with the patient (e.g., bedside manner comments) (i.e., generating feedback data for a portion of the data captured by the camera using one or more user input devices associated with the computing device) to improve performance.), the feedback data corresponding to the computing device moving an indicator, within an image or a video viewable on the augmented reality device, to select a portion of the image or the video (Shakil, paragraphs [0035] and [0054]; Paragraph [0035] discloses that the message client of the scribe cockpit interface 122 can allow the provider to transmit a query to the scribe, to which the scribe can transmit a response that resolves the query. In examples, the scribe can input an answer (e.g., by typing, by speaking, by providing a link to an answer, etc.) at the message client for transmission back to the provider; the scribe can use one of multiple tools, including a tool to select graphics, tables, and manipulated screen shots (i.e., the feedback data includes selecting a portion of the image or video). Paragraph [0054] discloses that the provided audio and/or video manipulation tools can facilitate multimedia capture and incorporation of multimedia (e.g., selected image/video clips, edited image/videos) (i.e., selecting a portion of the image or the video) into content generated or prepared by the scribe (e.g., as in multimedia-laden EHR notes). In variations of the set of tools including provision of audio and/or video streams to the scribe, the set of tools can also enable the scribe to provide real time and/or delayed feedback to the provider regarding aspects of the interactions with the patient (e.g., bedside manner comments) to improve performance (i.e., the feedback data corresponds to moving an indicator within the image viewable on the augmented reality device to select a portion of the image or video).); and
- a display of the augmented reality device configured to present the feedback data incorporated with the portion of the data superimposed over a real-world view at the display (Shakil, paragraphs [0024] and [0061]; Paragraph [0061] discloses that the feedback can be received at an embodiment of the provider workstation and/or the mobile provider interface (i.e., presenting the feedback from the third party device on the display of the provider’s augmented reality device). Paragraph [0024] discloses that combining displaying capabilities and transparency can facilitate an augmented reality or heads-up display wherein a projected image or graphic is superimposed over a real-world view as perceived by the user through the lens elements 610, 612 (i.e., the feedback data that is received from third party devices may be displayed on the transparent lens of the provider’s augmented reality device and superimposed over the real-world view of the provider during patient encounters).), the feedback data and the portion of the data displayed simultaneously on the display (Shakil, paragraphs [0029], [0054], [0060], and [0061]; Paragraph [0029] discloses that the computing device 600 (i.e., augmented reality device) with the mobile provider interface 110 allows a provider to summon information from one or more sources, and to receive a response (e.g., at the computing device). The sources, which can be electronic databases, scheduling systems and tools, electronic information sources (e.g., Wikipedia, PUBMED, UPTODATE, EPOCRATES), and electronic health records (i.e., medical multimedia data), can be mediated by a scribe operating at a scribe cockpit 120. In variations, the response can be provided and/or rendered at a display of a computing device 600 accessible by the provider during interactions with the patient (i.e., displaying the medical multimedia data on the display of the provider’s augmented reality device), and/or during review of content generated by the scribe (i.e., displaying feedback data that is transmitted from the third party device). Further, paragraph [0054] discloses that the scribe tools can also enable a scribe (i.e., a third party) to provide real time and/or delayed feedback to the provider regarding aspects of the interactions with the patient (e.g., bedside manner comments) to improve performance. Paragraph [0060] discloses that the feedback can be provided to the scribe and/or another entity (e.g., regarding quality of content generated by the scribe) in a qualitative (e.g., free form text/verbal feedback) and/or quantitative (e.g., using a scale of values) manner. Paragraph [0061] discloses that the feedback can be received at an embodiment of the provider workstation and/or the mobile provider interface (i.e., where paragraph [0015] discloses that the mobile provider interface is coupled to the display of the augmented reality device worn by the provider). Since the feedback can be provided to the provider’s computing device in real time in the form of free text or scales of values, this disclosure is interpreted as being the equivalent of providing the feedback data simultaneously with the other health information from various sources described in paragraph [0029] (i.e., the medical multimedia data) (also see paragraph [0015] and Figure 1, where the mobile provider interface 110 is coupled to the display 112 worn by the provider). Therefore, Shakil explicitly discloses that the feedback data is displayed simultaneously, in real time, on the display of the provider’s goggles during the medical encounter, together with the health information from the various sources described in paragraph [0029].).
Regarding claim 2,
- Shakil discloses the limitations of claim 1 (which claim 2 depends on), as described above.
- Shakil further discloses a system, wherein:
- the augmented reality device includes a wearable headset device (Shakil, paragraph [0024]; Paragraph [0024] discloses that the computing device 600 can be a wearable head-mounted computing device 602 (i.e., the augmented reality device includes a wearable headset device), which can be the VUZIX M100 video eyewear device, Google Glass, Looxcie wearable camera device, a virtual reality headset (e.g., Oculus Rift), and/or any other similar head-mounted display device or wearable augmented reality device (i.e., examples of wearable headset devices).).
Regarding claim 4,
- Shakil discloses the limitations of claim 1 (which claim 4 depends on), as described above.
- Shakil further discloses a system, wherein:
- the augmented reality device includes a microphone (Shakil, Abstract and paragraph [0020]; Paragraph [0020] discloses that the computing device 600 includes an audio sensor, where the Abstract discloses that the computing device includes a microphone.), and
- the data includes medical multimedia data generated with both the camera and the microphone (Shakil, paragraph [0032]; Paragraph [0032] discloses that the scribe cockpit interface 122 preferably couples to a display and a speaker, in order to transmit video and audio streams from provider-patient interactions (i.e., the data includes medical multimedia data generated with both the camera and the microphone).).
Regarding claim 5,
- Shakil discloses the limitations of claim 1 (which claim 5 depends on), as described above.
- Shakil further discloses a system, wherein:
- the data includes medical multimedia data manipulated in space responsive to one or more gestures detected by the camera (Shakil, paragraphs [0054] and [0058]; Paragraph [0054] discloses that the system includes a set of tools which provide audio and/or video manipulation tools (e.g., rewind, fast forward, pause, accelerated playback, decelerated playback tools) (i.e., one or more gestures that are detected by the camera) that are controlled by an input module (e.g., mouse, keyboard, touchpad, foot pedals, etc.) (i.e., the data includes medical multimedia data that is manipulated in space responsive to one or more gestures) to facilitate information retrieval for template completion. Paragraph [0054] also discloses that providing audio and/or video manipulation tools can facilitate multimedia capture and incorporation of multimedia (e.g., selected image/video clips, edited image/videos) into content (i.e., the one or more gestures are detected by the camera to create the selected image/video clips and/or edited image/videos). Paragraph [0058] also discloses that the user interface can incorporate an input module (e.g., a voice command module) configured to receive inputs from the provider for review of content (i.e., detecting one or more gestures), such as receiving inputs from the provider configured to amend and/or highlight aspects of content generated by the scribe (i.e., the data includes medical multimedia data that is manipulated in space responsive to one or more gestures).).
Regarding claim 6,
- Shakil discloses the limitations of claim 1 (which claim 6 depends on), as described above.
- Shakil further discloses a system, wherein:
- the display includes a transparent lens of smartglasses (Shakil, paragraphs [0020] and [0024]; Paragraph [0020] discloses that the computing device 600 includes a display 112, where the display 112 can be an optical see-through display, an optical see-around display, or a video see-through display (i.e., the display of the augmented reality device may be a transparent lens). Paragraph [0024] teaches that the wearable head-mounted computing device 602 may be the VUZIX M100 video eyewear device, Google Glass, Looxcie wearable camera device, a virtual reality headset (e.g., Oculus Rift), and/or any other similar head-mounted display device or wearable augmented reality device (i.e., the augmented reality device includes examples of smartglasses), and any of the lens elements 610, 612 can be formed of any material (e.g., polycarbonate, CR-39, TRIVEX) that can suitably display a projected image or graphic. Each lens element 610, 612 can also be sufficiently transparent to allow a user to see through the lens element (i.e., the display of the smartglasses includes a transparent lens).).
Regarding claim 7,
- Shakil discloses the limitations of claim 1 (which claim 7 depends on), as described above.
- Shakil further discloses a system, wherein:
- the portion of the data includes a video, and the feedback data is added to the video when the video is presented at the display (Shakil, paragraph [0054]; Paragraph [0054] discloses that the provided audio and/or video manipulation tools can facilitate multimedia capture and incorporation of multimedia (e.g., selected image/video clips, edited image/videos) into content generated or prepared by the scribe (e.g., as in multimedia-laden EHR notes) (i.e., the feedback data). In variations of the set of tools including provision of audio and/or video streams to the scribe (i.e., the portion of the data includes a video), the set of tools can also enable the scribe to provide real time and/or delayed feedback to the provider regarding aspects of the interactions with the patient (e.g., bedside manner comments) to improve performance (i.e., the feedback data is added to the video when the video is presented at the display of the augmented reality device).).
Regarding claim 8,
- Shakil et al. (Pub. No. US 2014/0222526) discloses:
- a method to deliver healthcare (Shakil, paragraph [0045]; Paragraph [0045] discloses a method 200 for augmenting performance of a provider.), the method comprising:
- capturing medical multimedia data with one or more sensors, the medical multimedia data being associated with a medical procedure (Shakil, paragraphs [0018], [0019], [0024], and [0025]; Paragraph [0024] discloses that the computing device 600 can be a wearable head-mounted computing device 602, which can be the VUZIX M100 video eyewear device, Google Glass, Looxcie wearable camera device, a virtual reality headset (e.g., Oculus Rift), and/or any other similar head-mounted display device or wearable augmented reality device (i.e., an augmented reality device which includes one or more sensors). Paragraph [0025] discloses that the video camera 620 can be configured to capture images and may be forward facing to capture at least a portion of the real-world view perceived by the user to generate an augmented reality where computer generated images appear to interact with the real-world view perceived by the user (i.e., capturing the medical multimedia data with the one or more sensors). Paragraphs [0018] and [0019] disclose that these interactions may be interactions between a medical provider and a patient, including conversations between the provider and the patient, wherein the patient provides symptoms, progress, concerns, medication information, allergy information, insurance information, and/or any other suitable health-related information to the provider; transactions wherein the patient provides demographic and/or family history information to the provider; interactions wherein the provider facilitates performance or acquisition of lab tests for the patient; interactions wherein the provider generates image data (e.g., from x-rays, MRIs, CT scanning, ultrasound scanning, etc.) from the patient; interactions wherein the provider generates other health metric data (e.g., cardiology-related data, respiratory data) from the patient; and/or any other suitable interaction between the provider and the patient (see Shakil, paragraph [0018]); an interactive session wherein the provider is examining the patient in a clinical setting or in the examining room of an office or other healthcare facility and eliciting information from the patient by questioning the patient; interactions in a hospital emergency room; interactions in an operating suite where the patient is unconscious; and interactions in a patient’s home, research setting, etc. (see Shakil, paragraph [0019]) (i.e., examples showing that the captured medical multimedia data is captured in connection with a medical procedure).);
- presenting the medical multimedia data at a display of an augmented reality device (Shakil, paragraph [0024]; Paragraph [0024] discloses that any of the lens elements 610, 612 can be formed of any material (e.g., polycarbonate, CR-39, TRIVEX) that can suitably display a projected image or graphic (i.e., presenting the medical multimedia data at a display of an augmented reality device); and the display capabilities and transparency can facilitate an augmented reality or heads-up display wherein a projected image or graphic is superimposed over a real-world view as perceived by the user through the lens elements 610, 612.);
- transmitting, using a network connection between the augmented reality device and a computing device, the medical multimedia data to the computing device in response to one or more input commands (Shakil, paragraphs [0021] and [0045]; Paragraph [0021] discloses that the computing device 600 preferably enables transmission of data generated using the computing device 600 by way of a communication link 410 (e.g., a wired connection, a wireless connection) that can be configured to communicate with a remote device (i.e., transmitting the medical multimedia data between the augmented reality device and the computing device, using a network connection). Paragraph [0045] discloses that the method 200 includes the provider being able to send a request (i.e., one or more input commands) to transmit at least one of a video stream and an audio stream, from a point of view of the provider during the set of interactions, to a scribe at a scribe cockpit S220; and subsequently transmit the communication from the provider to the scribe cockpit which includes content derived from the set of interactions (i.e., transmitting the medical multimedia data over the network in response to the one or more input commands).);
- obtaining feedback data associated with the medical multimedia data (Shakil, paragraphs [0031] and [0054]; Paragraph [0031] discloses that the scribe cockpit 120 (i.e., an example of a remote device described in paragraph [0021] of Shakil, where the remote device is interpreted to be the equivalent of Applicant’s computing device) enables a scribe to receive information from interactions between a patient and the provider, which can be used to provide guidance and/or feedback to the provider (i.e., obtaining feedback data associated with the medical multimedia data). Further, paragraph [0054] discloses that the set of tools [provided to the scribe] includes providing (1) options (e.g., by drop-down menus, by auto-completing partially inputted information) and (2) audio and/or video manipulation tools (e.g., rewind, fast forward, pause, accelerated playback, decelerated playback tools) that are controlled by an input module (e.g., mouse, keyboard, touchpad, foot pedals, etc.) to facilitate information retrieval for template completion and multimedia capture and incorporation of multimedia into content generated or prepared by the scribe, and enables the scribe to provide real time and/or delayed feedback to the provider regarding aspects of the interactions with the patient (e.g., bedside manner comments) (i.e., obtaining feedback data associated with the medical multimedia data) to improve performance.), the feedback data corresponds to the computing device moving an indicator, within an image or a video viewable on the augmented reality device, to select a portion of the image or the video (Shakil, paragraphs [0035] and [0054]; Paragraph [0035] discloses that the message client of the scribe cockpit interface 122 can allow the provider to transmit a query to the scribe, to which the scribe can transmit a response that resolves the query. In examples, the scribe can input an answer (e.g., by typing, by speaking, by providing a link to an answer, etc.) at the message client for transmission back to the provider; the scribe can use one of multiple tools, including a tool to select graphics, tables, and manipulated screen shots (i.e., the feedback data includes selecting a portion of the image or video). Paragraph [0054] discloses that the provided audio and/or video manipulation tools can facilitate multimedia capture and incorporation of multimedia (e.g., selected image/video clips, edited image/videos) (i.e., selecting a portion of the image or the video) into content generated or prepared by the scribe (e.g., as in multimedia-laden EHR notes). In variations of the set of tools including provision of audio and/or video streams to the scribe, the set of tools can also enable the scribe to provide real time and/or delayed feedback to the provider regarding aspects of the interactions with the patient (e.g., bedside manner comments) to improve performance (i.e., the feedback data corresponds to moving an indicator within the image viewable on the augmented reality device to select a portion of the image or video).); and
- presenting, at the display of the augmented reality device, the feedback data incorporated with the medical multimedia data (Shakil, paragraphs [0029], [0054], [0060], and [0061]; Paragraph [0029] discloses that the computing device 600 (i.e., augmented reality device) with the mobile provider interface 110 allows a provider to summon information from one or more sources, and to receive a response (e.g., at the computing device). The sources, which can be electronic databases, scheduling systems and tools, electronic information sources (e.g., Wikipedia, PUBMED, UPTODATE, EPOCRATES), and electronic health records (i.e., medical multimedia data), can be mediated by a scribe operating at a scribe cockpit 120. In variations, the response can be provided and/or rendered at a display of a computing device 600 accessible by the provider during interactions with the patient (i.e., displaying the medical multimedia data on the display of the provider’s augmented reality device), and/or during review of content generated by the scribe (i.e., displaying feedback data that is transmitted from the third party device). Further, paragraph [0054] discloses that the scribe tools can also enable a scribe (i.e., a third party) to provide real time and/or delayed feedback to the provider regarding aspects of the interactions with the patient (e.g., bedside manner comments) to improve performance. Paragraph [0060] discloses that the feedback can be provided to the scribe and/or another entity (e.g., regarding quality of content generated by the scribe) in a qualitative (e.g., free form text/verbal feedback) and/or quantitative (e.g., using a scale of values) manner. Paragraph [0061] discloses that the feedback can be received at an embodiment of the provider workstation and/or the mobile provider interface (i.e., where paragraph [0015] discloses that the mobile provider interface is coupled to the display of the augmented reality device worn by the provider). Since the feedback can be provided to the provider’s computing device in real time in the form of free text or scales of values, this disclosure is interpreted as being the equivalent of providing the feedback data simultaneously with the other health information from various sources described in paragraph [0029] (i.e., the medical multimedia data) (also see paragraph [0015] and Figure 1, where the mobile provider interface 110 is coupled to the display 112 worn by the provider). Therefore, Shakil explicitly discloses that the feedback data is displayed simultaneously, in real time, on the display of the provider’s goggles during the medical encounter, together with the health information from the various sources described in paragraph [0029].).
Regarding claim 10,
- Shakil discloses the limitations of claim 8 (which claim 10 depends on), as described above.
- Shakil further discloses a method, wherein:
- the presenting of the medical multimedia data at the display of the augmented reality device includes superimposing the medical multimedia data over a real-world view through a transparent lens of the augmented reality device (Shakil, paragraphs [0020] and [0024]; Paragraph [0024] discloses that combining displaying capabilities and transparency can facilitate an augmented reality or heads-up display wherein a projected image or graphic is superimposed over a real-world view as perceived by the user through the lens elements 610, 612 (i.e., presenting the medical multimedia data at the display of the augmented reality device by superimposing the medical multimedia data over a real-world view of the augmented reality device). Paragraph [0020] discloses that each lens element 610, 612 can also be sufficiently transparent to allow a user to see through the lens element (i.e., the display of the augmented reality device is a transparent lens).).
Regarding claim 11,
- Shakil discloses the limitations of claim 8 (which claim 11 depends on), as described above.
- Shakil discloses a method, further comprising:
- capturing a manipulation of the medical multimedia data in space using the one or more sensors (Shakil, paragraph [0054]; Paragraph [0054] discloses that the system includes a set of tools which provide audio and/or video manipulation tools (e.g., rewind, fast forward, pause, accelerated playback, decelerated playback tools) that are controlled by an input module (e.g., mouse, keyboard, touchpad, foot pedals, etc.) (i.e., capturing a manipulation of the medical multimedia data in space using the one or more sensors) to facilitate information retrieval for template completion.), the manipulation being captured based on an interpretation of a gesture via the one or more sensors to provide one or more input commands to the augmented reality device (Shakil, paragraph [0054]; Paragraph [0054] discloses that providing audio and/or video manipulation tools can facilitate multimedia capture and incorporation of multimedia (e.g., selected image/video clips, edited image/videos) into content (i.e., the manipulation is captured based on an interpretation of a gesture, being the input command from the user, via the one or more sensors, being the input module).).
Regarding claim 12,
- Shakil discloses the limitations of claim 8 (which claim 12 depends on), as described above.
- Shakil further discloses a method, wherein:
- the medical multimedia data includes a video (Shakil, paragraph [0045]; Paragraph [0045] discloses that the method includes transmitting a request and at least one of a video stream and audio stream from a point of view of the provider during the set of interactions [with the patient] (i.e., the medical multimedia data includes a video), to a scribe at a scribe cockpit S220.), and the feedback data created by the computing device is incorporated into and displayed simultaneously with the video (Shakil, paragraphs [0029], [0054], [0060], and [0061]; Paragraph [0029] discloses that the computing device 600 (i.e., augmented reality device) with the mobile provider interface 110 allows a provider to summon information from one or more sources, and to receive a response (e.g., at the computing device). The sources, which can be electronic databases, scheduling systems and tools, electronic information sources (e.g., Wikipedia, PUBMED, UPTODATE, EPOCRATES), and electronic health records (i.e., medical multimedia data), can be mediated by a scribe operating at a scribe cockpit 120. In variations, the response can be provided and/or rendered at a display of a computing device 600 accessible by the provider during interactions with the patient (i.e., displaying the medical multimedia data on the display of the provider’s augmented reality device), and/or during review of content generated by the scribe (i.e., displaying feedback data that is transmitted from the third party device). Further, paragraph [0054] discloses that the scribe tools can also enable a scribe (i.e., a third party) to provide real time and/or delayed feedback to the provider regarding aspects of the interactions with the patient (e.g., bedside manner comments) to improve performance. Paragraph [0060] discloses that the feedback can be provided to the scribe and/or another entity (e.g., regarding quality of content generated by the scribe) in a qualitative (e.g., free form text/verbal feedback) and/or quantitative (e.g., using a scale of values) manner. Paragraph [0061] discloses that the feedback can be received at an embodiment of the provider workstation and/or the mobile provider interface. Since the feedback can be provided to the provider’s computing device in real time in the form of free text or scales of values, this disclosure is interpreted as being the equivalent of providing the feedback data simultaneously with the other health information from various sources described in paragraph [0029] (i.e., the medical multimedia data) (also see paragraph [0015] and Figure 1, where the mobile provider interface 110 is coupled to the display 112 worn by the provider). Therefore, Shakil explicitly discloses that the feedback data is displayed simultaneously, in real time, on the display of the provider’s goggles during the medical encounter, together with the health information from the various sources described in paragraph [0029].).
Regarding claim 13,
- Shakil discloses the limitations of claim 8 (which claim 13 depends on), as described above.
- Shakil further discloses a method, wherein:
- the one or more sensors includes a microphone (Shakil, paragraph [0020]; Paragraph [0020] discloses that the computing device includes an audio sensor (i.e., a microphone).) and a camera (Shakil, paragraph [0025]; Paragraph [0025] discloses that the computing device 600 includes a video camera 620.), and the camera is configured to capture the medical multimedia data in response to a verbal command received at the microphone (Shakil, paragraphs [0048] and [0058]; Paragraph [0058] generally discloses that the user interface can incorporate a display configured to present information to the provider, and an input module (e.g., keyboard, mouse, touchpad, touchscreen, voice command module, etc.) configured to receive inputs from the provider for review of content (i.e., receiving verbal commands through the voice command module), and is capable of receiving inputs from the provider configured to amend and/or highlight aspects of content generated by the scribe (i.e., receiving verbal commands to capture the medical multimedia data). For example, paragraph [0048] discloses that the provider can interface with the computing device verbally. In the specific examples of Block S210, the provider can request to pull information from an EHR (e.g., the provider can request cell counts and other metrics related to the patient's health from an EHR), wherein the request is performed by a combination of verbal commands (i.e., capturing the medical multimedia data in response to a verbal command received at the microphone).).
Regarding claim 14,
- Shakil et al. (Pub. No. US 2014/0222526) discloses:
- a non-transitory computer-readable storage media storing computer-executable instructions that, when executed by one or more processors, cause operations comprising (Shakil, paragraph [0064]; Paragraph [0064] discloses that various processes of the preferred method can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions (i.e., a non-transitory computer-readable storage medium), where the instructions are preferably executed by computer-executable components preferably integrated with a system and one or more portions of the control module 155 and/or a processor (i.e., the instructions are executed by one or more processors).):
- capturing medical multimedia data with one or more sensors, the medical multimedia data being associated with a medical procedure (Shakil, paragraphs [0018], [0019], [0024], and [0025]; Paragraph [0024] discloses that the computing device 600 can be a wearable head-mounted computing device 602, which can be the VUZIX M100 video eyewear device, Google Glass, Looxcie wearable camera device, a virtual reality headset (e.g., Oculus Rift), and/or any other similar head-mounted display device or wearable augmented reality device (i.e., an augmented reality device which includes one or more sensors). Paragraph [0025] discloses that the video camera 620 can be configured to capture images and may be forward facing to capture at least a portion of the real-world view perceived by the user to generate an augmented reality where computer generated images appear to interact with the real-world view perceived by the user (i.e., capturing the medical multimedia data with the one or more sensors). Paragraphs [0018] and [0019] disclose that these interactions may be interactions between a medical provider and a patient, including conversations between the provider and the patient, wherein the patient provides symptoms, progress, concerns, medication information, allergy information, insurance information, and/or any other suitable health-related information to the provider; transactions wherein the patient provides demographic and/or family history information to the provider; interactions wherein the provider facilitates performance or acquisition of lab tests for the patient; interactions wherein the provider generates image data (e.g., from x-rays, MRIs, CT scanning, ultrasound scanning, etc.) from the patient; interactions wherein the provider generates other health metric data (e.g., cardiology-related data, respiratory data) from the patient; and/or any other suitable interaction between the provider and the patient (see Shakil, paragraph [0018]); an interactive session wherein the provider is examining the patient in a clinical setting or in the examining room of an office or other healthcare facility and eliciting information from the patient by questioning the patient; interactions in a hospital emergency room; interactions in an operating suite where the patient is unconscious; and interactions in a patient’s home, research setting, etc. (see Shakil, paragraph [0019]) (i.e., examples showing that the captured medical multimedia data is captured in connection with a medical procedure).);
- transmitting the medical multimedia data to a remote computing device using one or more network connections (Shakil, paragraphs [0021] and [0045]; Paragraph [0021] discloses that the computing device 600 preferably enables transmission of data generated using the computing device 600 by way of a communication link 410 (e.g., a wired connection, a wireless connection) that can be configured to communicate with a remote device (i.e., transmitting the medical multimedia data between the augmented reality device and a remote computing device, using a network connection). Paragraph [0045] discloses that the method 200 includes the provider being able to send a request (i.e., one or more input commands) to transmit at least one of a video stream and an audio stream, from a point of view of the provider during the set of interactions, to a scribe at a scribe cockpit S220; and subsequently transmit the communication from the provider to the scribe cockpit which includes content derived from the set of interactions (i.e., transmitting the medical multimedia data over the network in response to the one or more input commands).);
- obtaining feedback data associated with the medical multimedia data (Shakil, paragraphs [0031] and [0054]; Paragraph [0031] discloses that the scribe cockpit 120 (i.e., an example of a remote device described in paragraph [0021] of Shakil, where the remote device is interpreted to be the equivalent of Applicant’s computing device) enables a scribe to receive information from interactions between a patient and the provider, which can be used to provide guidance and/or feedback to the provider (i.e., obtaining feedback data associated with the medical multimedia data). Further, paragraph [0054] discloses that the set of tools [provided to the scribe] includes providing (1) options (e.g., by drop-down menus, by auto-completing partially inputted information) and (2) audio and/or video manipulation tools (e.g., rewind, fast forward, pause, accelerated playback, decelerated playback tools) that are controlled by an input module (e.g., mouse, keyboard, touchpad, foot pedals, etc.) to facilitate information retrieval for template completion and multimedia capture and incorporation of multimedia into content generated or prepared by the scribe, and enables the scribe to provide real time and/or delayed feedback to the provider regarding aspects of the interactions with the patient (e.g., bedside manner comments) (i.e., obtaining feedback data associated with the medical multimedia data) to improve performance.), the feedback data corresponds to the remote computing device moving an indicator, within an image or a video viewable on an augmented reality device, to select a portion of the image or the video (Shakil, paragraphs [0035] and [0054]; Paragraph [0035] discloses that the message client of the scribe cockpit interface 122 can allow the provider to transmit a query to the scribe, to which the scribe can transmit a response that resolves the query. In examples, the scribe can input an answer (e.g., by typing, by speaking, by providing a link to an answer, etc.) at the message client for transmission back to the provider; the scribe can use one of multiple tools, including a tool to select graphics, tables, and manipulated screen shots (i.e., the feedback data includes selecting a portion of the image or video). Paragraph [0054] discloses that the provided audio and/or video manipulation tools can facilitate multimedia capture and incorporation of multimedia (e.g., selected image/video clips, edited image/videos) (i.e., selecting a portion of the image or the video) into content generated or prepared by the scribe (e.g., as in multimedia-laden EHR notes). In variations of the set of tools including provision of audio and/or video streams to the scribe, the set of tools can also enable the scribe to provide real time and/or delayed feedback to the provider regarding aspects of the interactions with the patient (e.g., bedside manner comments) to improve performance (i.e., the feedback data corresponds to the remote device moving an indicator within the image viewable on the augmented reality device to select a portion of the image or video).); and
- outputting the feedback data incorporated with the medical multimedia data for display using a display device of the augmented reality device (Shakil, paragraphs [0029], [0054], [0060], and [0061]; Paragraph [0029] discloses that the computing device 600 (i.e., the augmented reality device) with the mobile provider interface 110 allows a provider to summon information from one or more sources, and to receive a response (e.g., at the computing device). The sources, which can be electronic databases, scheduling systems and tools, electronic information sources (e.g., Wikipedia, PUBMED, UPTODATE, EPOCRATES), and electronic health records (i.e., medical multimedia data), can be mediated by a scribe operating at a scribe cockpit 120. In variations, the response can be provided and/or rendered at a display of a computing device 600 accessible by the provider during interactions with the patient (i.e., displaying the medical multimedia data on the display of the provider’s augmented reality device), and/or during review of content generated by the scribe (i.e., displaying feedback data that is transmitted from the third party device). Further, paragraph [0054] discloses that the scribe tools can also enable a scribe (i.e., a third party) to provide real time and/or delayed feedback to the provider regarding aspects of the interactions with the patient (e.g., bedside manner comments) to improve performance. Paragraph [0060] discloses that the feedback can be provided to the scribe and/or another entity (e.g., regarding quality of content generated by the scribe) in a qualitative (e.g., free form text/verbal feedback) and/or quantitative (e.g., using a scale of values) manner. Paragraph [0061] discloses that the feedback can be received at an embodiment of the provider workstation and/or the mobile provider interface (i.e., where paragraph [0015] discloses that the mobile provider interface is coupled to the display of the augmented reality device worn by the provider). Since the feedback can be provided to the provider’s computing device in real time in the form of free text or scales of values, this disclosure is interpreted as being the equivalent of outputting the feedback data simultaneously and incorporated with the other health information from various sources described in paragraph [0029] (i.e., the medical multimedia data) (also see paragraph [0015] and Figure 1, where the mobile provider interface 110 is coupled to the display 112 worn by the provider). Therefore, Shakil explicitly discloses that the feedback data is displayed simultaneously, in real time, on the display of the provider’s goggles during the medical encounter, together with the health information from the various sources described in paragraph [0029].).
Regarding claim 15,
- Shakil discloses the limitations of claim 14 (which claim 15 depends on), as described above.
- Shakil further discloses a non-transitory computer-readable storage medium, wherein:
- the augmented reality device includes smartglasses, and the one or more sensors form at least part of a sensor assembly of the smartglasses (Shakil, paragraphs [0020] and [0024]; Paragraph [0020] discloses that the computing device 600 includes a display 112, where the display 112 can be an optical see-through display, an optical see-around display, or a video see-through display. Paragraph [0024] teaches that the wearable head-mounted computing device 602 may be the VUZIX M100 video eyewear device, Google Glass, Looxcie wearable camera device, a virtual reality headset (e.g., Oculus Rift), and/or any other similar head-mounted display device or wearable augmented reality device (i.e., the augmented reality device includes examples of smartglasses), and any of the lens elements 610, 612 can be formed of any material (e.g., polycarbonate, CR-39, TRIVEX) that can suitably display a projected image or graphic. Each lens element 610, 612 can also be sufficiently transparent to allow a user to see through the lens element (i.e., the one or more sensors form at least part of a sensor assembly of the smartglasses).).
Regarding claim 17,
- Shakil discloses the limitations of claim 14 (which claim 17 depends on), as described above.
- Shakil further discloses a non-transitory computer-readable storage medium, wherein:
- the capturing of the medical multimedia data is responsive to detecting, with the one or more sensors, a first gesture (Shakil, paragraph [0054]; Paragraph [0054] discloses that the system includes a set of tools which provide audio and/or video manipulation tools (e.g., rewind, fast forward, pause, accelerated playback, decelerated playback tools) that are controlled by an input module (e.g., mouse, keyboard, touchpad, foot pedals, etc.) (i.e., a first gesture is detected with the one or more sensors) to facilitate information retrieval for template completion. Paragraph [0054] also discloses that providing audio and/or video manipulation tools can facilitate multimedia capture and incorporation of multimedia (e.g., selected image/video clips, edited image/videos) into content (i.e., capturing the medical multimedia data in response to detecting the first gesture).); and
- the transmitting of the medical multimedia data is responsive to detecting, with the one or more sensors, a second gesture (Shakil, paragraph [0048]; Paragraph [0048] discloses that the provider can interface with the computing device verbally. In the specific examples of Block S210, the provider can request to pull information from an EHR (e.g., the provider can request cell counts and other metrics related to the patient's health from an EHR), wherein the request is performed by a combination of verbal commands (i.e., transmitting the medical multimedia data in response to detecting a second gesture with the one or more sensors).).
Regarding claim 18,
- Shakil discloses the limitations of claim 14 (which claim 18 depends on), as described above.
- Shakil further discloses a non-transitory computer-readable storage medium, wherein:
- the feedback data includes results of a measurement performed based on the medical multimedia data (Shakil, paragraph [0063]; Paragraph [0063] discloses that the method 200 can facilitate the provider in measuring features of a patient encountered during diagnosis or treatment (e.g., incision dimensions, tissue morphological dimensions, etc.) (i.e., the feedback data received from other computing devices includes results of a measurement performed based on the medical multimedia data).).
Regarding claim 19,
- Shakil discloses the limitations of claim 18 (which claim 19 depends on), as described above.
- Shakil further discloses a non-transitory computer-readable storage medium, wherein:
- the measurement includes a distance measurement on a patient (Shakil, paragraph [0063]; Paragraph [0063] discloses that the method 200 can facilitate the provider in measuring features of a patient encountered during diagnosis or treatment (e.g., incision dimensions, tissue morphological dimensions, etc.) (i.e., measuring the dimensions of features of the patient, such as incision dimensions or tissue morphological dimensions, naturally includes a distance measurement on a patient, i.e., measuring the length, width, and/or height of an incision).).
Claim Rejections - 35 USC § 103
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over:
- Shakil et al. (Pub. No. US 2014/0222526), in view of:
- Zhang et al. (Pub. No. US 2017/0064214).
Regarding claim 3,
- Shakil discloses the limitations of claim 1 (which claim 3 depends on), as described in the Claim Rejections - 35 U.S.C. § 102 Section above.
- Shakil further teaches a system, wherein:
- the augmented reality device includes one or more sensors configured to detect one or more gestures (Shakil, paragraph [0023]; Paragraph [0023] teaches that the computing device 600 can include sensors and elements for eye-tracking and gestural detection (e.g., wink detection) (i.e., one or more sensors configured to detect one or more gestures).).
- Shakil does not explicitly teach the following limitation; however, in the analogous art of systems and methods which utilize augmented reality devices, Zhang et al. (Pub. No. US 2017/0064214) teaches a system, wherein:
- the camera is turned on in response to detecting the one or more gestures via the one or more sensors (Zhang, paragraphs [0108] and [0388]; Paragraph [0388] teaches that the image capturing apparatus 100 starts a camera application up. The camera application may be started up according to a user operation. For example, if it is detected that the user clicks an icon of the camera application, the camera application is started up (i.e., the camera is turned on in response to detecting one or more gestures via the one or more sensors). Alternatively, if a voice command used to start the camera application up is detected, the camera application may be started up (i.e., the camera is turned on in response to detecting one or more gestures via the one or more sensors). Paragraph [0108] teaches that this feature is beneficial for performing image signal processing for image quality improvement.).
Therefore, it would have been obvious to one of ordinary skill in the art of systems and methods which utilize augmented reality devices, before the effective filing date of the claimed invention, to modify the system for augmenting performance of a provider taught by Shakil, to incorporate a step and feature directed to turning on the camera in response to detecting one or more gestures via the one or more sensors, as taught by Zhang, in order to perform image signal processing for image quality improvement. See Zhang, paragraph [0108]; see also MPEP § 2143 G.
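For illustration only, the hypothetical sketch below models the two alternative start-up triggers Zhang describes in paragraph [0388], where either a detected user operation or a voice command starts the camera application. The event representation and the start_camera() function are invented for the example.

```python
# For illustration only: starting a camera in response to either a detected
# gesture or a voice command. Event names and start_camera() are hypothetical.

def start_camera():
    # Stand-in for starting the camera application, after which image signal
    # processing for image quality improvement could be performed.
    print("camera application started")


def handle_event(kind, value):
    # Either trigger alone is sufficient to start the camera application.
    if kind == "gesture" and value == "tap_camera_icon":
        start_camera()
    elif kind == "voice" and value == "start camera":
        start_camera()


handle_event("gesture", "tap_camera_icon")  # user-operation trigger
handle_event("voice", "start camera")       # voice-command trigger
```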
Regarding claim 9,
- Shakil discloses the limitations of claim 8 (which claim 9 depends on), as described in the Claim Rejections - 35 U.S.C. § 102 Section above.
- Shakil does not explicitly teach the following limitation. However, in the analogous art of systems and methods which utilize augmented reality devices, Zhang et al. (Pub. No. US 2017/0064214) teaches a method, further comprising:
- detecting, with one or more sensors, a gesture to turn on the display of the augmented reality device (Zhang, paragraphs [0108] and [0388]; Paragraph [0388] teaches that the image capturing apparatus 100 starts a camera application up. The camera application may be started up according to a user operation. For example, if it is detected that the user clicks an icon of the camera application, the camera application is started up (i.e., turning on the camera is interpreted as turning on the display of the augmented reality device in response to detecting one or more gestures via the one or more sensors). Alternatively, if a voice command used to start the camera application up is detected, the camera application may be started up (i.e., turning on the camera is interpreted as turning on the display of the augmented reality device in response to detecting one or more gestures via the one or more sensors). Paragraph [0108] teaches that this feature is beneficial for performing image signal processing for image quality improvement.).
Therefore, it would have been obvious to one of ordinary skill in the art of systems and methods which utilize augmented reality devices, before the effective filing date of the claimed invention, to modify the method for augmenting performance of a provider taught by Shakil, to incorporate a step and feature directed to turning on the display of the augmented reality device in response to detecting one or more gestures via the one or more sensors, as taught by Zhang, in order to perform image signal processing for image quality improvement. See Zhang, paragraph [0108]; see also MPEP § 2143 G.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over:
- Shakil et al. (Pub. No. US 2014/0222526), in view of:
- Doo et al. (Pub. No. US 2017/0042631).
Regarding claim 16,
- Shakil discloses the limitations of claim 14 (which claim 16 depends on), as described in the Claim Rejections - 35 U.S.C. § 102 Section above.
- Shakil does not explicitly teach the following limitation. However, in the analogous art of systems and methods which utilize augmented reality devices, Doo et al. (Pub. No. US 2017/0042631) teaches a non-transitory computer-readable storage medium, wherein:
- the operations include converting at least the portion of the medical multimedia data into a compatible form associated with an operating system of the remote computing device (Doo, paragraph [0066]; Paragraph [0066] teaches that the input codec 74 can receive the image file 42 from the image source 44 and decompress the image file 42. If the image defined by the image file 42 is not to be modified or analyzed, the decompressed image file 42 can be transmitted to the transcoder 76. The transcoder 76 can convert the image file 42 to a different format of similar or like quality to gain compatibility with another program or application, if necessary (i.e., converting at least the portion of the medical multimedia data into a compatible form associated with an operating system of the remote computing device). Paragraph [0066] teaches that this feature is beneficial for gaining compatibility with another program or application.).
Therefore, it would have been obvious to one of ordinary skill in the art of systems and methods which utilize augmented reality devices, before the effective filing date of the claimed invention, to modify the non-transitory computer-readable storage medium for augmenting performance of a provider taught by Shakil, to incorporate a step and feature directed to converting at least the portion of the medical multimedia data into a compatible form associated with an operating system of the remote computing device, as taught by Doo, in order to gain compatibility with another program or application. See Doo, paragraph [0066]; see also MPEP § 2143 G.
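For illustration only, the following hypothetical sketch loosely mirrors the decompress-then-transcode arrangement Doo describes in paragraph [0066] (an input codec feeding a transcoder). The function names, dictionary fields, and format labels are invented for the example, and no real codec library is used.

```python
# For illustration only: a decompress-then-transcode pipeline. All names and
# format labels are hypothetical stand-ins, not Doo's actual implementation.

def decompress(image_file):
    # Stand-in for the input codec: unwrap the compressed container.
    return {"pixels": image_file["payload"], "format": image_file["format"]}


def transcode(decoded, target_format):
    # Stand-in for the transcoder: re-encode at similar or like quality so a
    # program or application on the receiving device can read the file.
    if decoded["format"] == target_format:
        return decoded
    return {"pixels": decoded["pixels"], "format": target_format}


received = {"payload": b"...", "format": "jpeg2000"}
compatible = transcode(decompress(received), target_format="png")
print(compatible["format"])  # -> png
```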
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over:
- Shakil et al. (Pub. No. US 2014/0222526), in view of:
- Saget et al. (Pub. No. US 2019/0122330).
Regarding claim 20,
- Shakil discloses the limitations of claim 14 (which claim 20 depends on), as described in the Claim Rejections - 35 U.S.C. § 102 Section above.
- Shakil teaches a non-transitory computer-readable storage medium, wherein:
- detecting one or more gestures with the one or more sensors (Shakil, paragraph [0023]; Paragraph [0023] teaches that the computing device 600 can include sensors and elements for eye-tracking and gestural detection (e.g., wink detection) (i.e., detecting one or more gestures with the one or more sensors).).
- Shakil does not explicitly teach the following limitation. However, in the analogous art of systems and methods which utilize augmented reality devices, Saget et al. (Pub. No. US 2019/0122330) teaches a non-transitory computer-readable storage medium, wherein:
- storing the medical multimedia data at a storage location responsive to detecting the one or more gestures (Saget, paragraphs [0086] and [0092]; Paragraph [0086] teaches that the system 1 allows the user 155 to see critical work information right in their field-of-view using a see-through visual display and then interact with it using familiar gestures, voice commands, and motion tracking (i.e., the one or more gestures). The data can be stored in data storage (i.e., storing the medical multimedia data at a storage location responsive to detecting the one or more gestures). Paragraph [0092] teaches that this feature is beneficial for storage and retrieval of preoperative medical images, including any metadata associated with these images and the ability to query those metadata.).
Therefore, it would have been obvious to one of ordinary skill in the art of systems and methods which utilize augmented reality devices, before the effective filing date of the claimed invention, to modify the non-transitory computer-readable storage medium for augmenting performance of a provider taught by Shakil, to incorporate a step and feature directed to storing the medical multimedia data at a storage location responsive to detecting the one or more gestures, as taught by Saget, in order to store and retrieve preoperative medical images, including any metadata associated with these images and the ability to query those metadata. See Saget, paragraph [0092]; see also MPEP § 2143 G.
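For illustration only, the hypothetical sketch below models gesture-triggered storage with queryable metadata, loosely following the storage-and-query capability discussed above. The in-memory store, the "save" gesture label, and the metadata fields are invented for the example.

```python
# For illustration only: storing captured media with queryable metadata when
# a "save" gesture is detected. The store and field names are hypothetical.

STORE = []


def on_gesture(gesture, media, metadata):
    # Store the media item, flattened together with its metadata, only when
    # the triggering gesture is detected.
    if gesture == "save":
        STORE.append({"media": media, **metadata})


def query(**criteria):
    # Return stored items whose metadata matches every given criterion.
    return [item for item in STORE
            if all(item.get(k) == v for k, v in criteria.items())]


on_gesture("save", b"frame-1", {"patient_id": "P-001", "phase": "preoperative"})
on_gesture("save", b"frame-2", {"patient_id": "P-001", "phase": "intraoperative"})
print(len(query(patient_id="P-001", phase="preoperative")))  # -> 1
```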
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nicholas Akogyeram II, whose telephone number is (571) 272-0464. The examiner can normally be reached Monday through Friday, between 8:00 am and 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Dunham, can be reached at (571) 272-8109. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Official replies to this Office action may now be submitted electronically by registered users of the EFS-Web system. Information on EFS-Web tools is available on the Internet at: http://www.uspto.gov/patents/process/file/efs/guidance/index.jsp. An EFS-Web Quick-Start Guide is available at: http://www.uspto.gov/ebc/portal/efs/quick-start.pdf.
Alternatively, official replies to this Office Action may still be submitted by any one of fax, mail, or hand delivery.
Faxed replies should be directed to the central fax at (571) 273-8300.
Mailed replies should be addressed to:
United States Patent and Trademark Office:
Commissioner for Patents
P.O. Box 1450
Alexandria, VA 22313-1450
Hand delivered responses should be brought to the United States Patent and Trademark Office Customer Service Window:
Randolph Building
401 Dulany Street
Alexandria, VA 22314
/N.A.A./Examiner, Art Unit 3686
/JONATHON A. SZUMNY/Primary Examiner, Art Unit 3686