Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-14 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
In claim 1, line 12 and claim 6, line 12, the claims recite: “….receiving a reference object from an online database.”
The specification does not address a “reference object” received from an online database. Correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-17, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Rudzicz (WO2022/104477) in view of Goodwin.
With respect to claim 1, Rudzicz teaches a system, shown by figure 1, for monitoring (para. 36 lines 1-2) and recording (para. 89, line 1) in a surgical environment (see para. 85) with real-time analysis (para. 162, line 3) and video overlays, comprising:
a plurality of cameras (para. 90, lines 1-2 and para. 91, lines 5-6) to capture visual data within the medical environment (see para. 91, lines 1-3).
Rudzicz teaches cameras with microphones (para. 166, line 10, see also para. 136, last three lines). The purpose of the microphones is to capture audio data of the surgical or monitoring process.
Rudzicz teaches a plurality of sensors 34, for measuring patient data, see para. 166, beginning at line 10.
Rudzicz teaches a computer system, described at para. 55, lines 1-3, having a processor 104 and a memory unit 110 for receiving patient data (for example, video data; see also para. 166), wherein the processing unit is configured to receive patient data from the visual data from the cameras (see para. 90, line 1 and para. 127) and audio data from an audio device.
Rudzicz teaches wherein the processing unit is instructed to execute video encoding processes, see for example, para. 165 line 1 regarding medical data encoder 22.
Rudzicz teaches a plurality of cameras for receiving an input image wherein the image is medical or surgically related, (see para. 91).
Rudzicz teaches a reference object, such as a feature vector identifying certain features, with a bounding box, within the surgical video. The features within the bounding box could be the head portions of those in the operating room performing medical services. This information has identification in the video stream for at least the reason that designated image information portions can be blurred. That the reference object is stored on a database is suggested at the last few lines of para. 134. The identification of the video stream includes feature vectors that are transferred and held in a data centre (the database as claimed).
Rudzicz teaches overlay of a bounding box in proximity to the feature vectors which describe the objects a user might want identified (i.e., the number of persons in the operating room). The bounding box has display coordinates, see para. 132, lines 8 and 9.
Rudzicz teaches overlaying the operating room floorplan data structure with the detected movement of individuals between sterile and non-sterile fields (see para. 104).
What is not taught by Rudzicz is receiving audio with respect to one of the cameras having a microphone; converting the audio to text; transmitting the text to a word database; and training an AI model using the word database.
With respect to receiving audio associated with a camera, Rudzicz teaches multiplexing sound with the video (para. 126) and the use of audio devices 132 (para. 176). Rudzicz does not specify that the audio devices use a microphone.
Goodwin teaches a plurality of cameras 106 and 108, as set forth in at least para. 22. At para. 28, Goodwin teaches video processing with the cameras and validating spatial movement by audio-processing where the audio signals are generated from microphone(s) 102. See also the bottom of para.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to substitute microphones, as taught by Goodwin, as the audio devices for capturing speech during a video event.
With respect to converting audio into parsed text, Goodwin teaches this feature as set forth in para. 35. Goodwin teaches “….transcribing speech to text by feeding audio signals from microphones 102 to an AI natural language processing model and audio voice transcription via deep neural networks and text enhancements.”
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the processor 104 of Rudzicz so that it includes an audio conversion algorithm as described by Goodwin for the purpose of converting speech to text.
With respect to transmitting parsed text to a word database, Goodwin teaches microphones for recording audible speech. AI models convert the speech to text. A cloud server 122 analyzes the text data using classification modules. Goodwin teaches data being captured and stored, see para. 51, lines 5-7. See also para. 52.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teaching of Goodwin with Rudzicz by using a speech-to-text converter to convert speech into text that is stored in a database, as suggested by Goodwin.
Regarding training an AI algorithm using the spoken word database, Rudzicz teaches using AI for identifying persons in the operating space. Goodwin also uses AI to extract and accumulate procedures in an operating or training room; at para. 23, Goodwin further teaches using AI to locate and identify instruments used in the medical procedure.
At the bottom of page 35, Goodwin teaches using speech converted to text, and that text enhancements are used in deep neural networks to achieve textual transcripts that identify the speaker. It is presumed that the text information is used in the AI algorithm for identifying the speaker who generated the text. Hence, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that in the same way that both Rudzicz and Goodwin use AI to identify the heads of those in the operating space, voice prints/spoken words can, as suggested by Goodwin, be used in an AI training sequence to identify the voices corresponding to those identified in the operating space.
Since Rudzicz and Goodwin are both directed to monitoring events in an operating room using video recording, the purpose of converting speech to words, as set forth by Goodwin, would have been recognized in the system of Rudzicz.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the processor 104 of Rudzicz so that it includes an audio conversion algorithm as described by Goodwin for the purpose of converting speech to text.
With respect to claim 2, see the top of page 35 regarding at least the “pressure sensor”.
With respect to claim 3, Rudzicz teaches a communication interface 106, operable over a secure network (see para. 122), for transmission and reception of voice, audio and patient information. See para. 136 and para. 173 for video and audio. See para. 176, lines 9-10 regarding patient records. See also figure 1 of Rudzicz.
With respect to claim 4, Rudzicz teaches a plurality of cameras 108 located on a surgical mount for microphone 102 and tracking sensor receiver 104. See the middle of figure 3.
With respect to claim 5, Rudzicz teaches the display coordinates but does not state that they are chroma-keyed.
However, Rudzicz teaches that a feature can be an individual characteristic of what is being observed (para. 128, lines 1-2). Moreover, Rudzicz teaches that the feature could be a specific color or shape; see also para. 128, lines 6-7. At para. 132, Rudzicz teaches a vector of bounding boxes (see line 5). Video operating platform 100 can detect approximately 40 heads at a time and can save the coordinates of all heads.
Rudzicz does not teach a chroma-key for each of the at least 40 bounding boxes.
Goodwin teaches an overlay of markers such that when an unauthorized person is in a prohibited zone, the user interface can initiate a breach alert and change the color of a section of the video region from green to yellow or from green to red (see the bottom of para. 43).
The Examiner contends that it would have been obvious to one of ordinary skill in the art to set a color for each of the distinct bounding boxes based on different characteristics of those heads detected by the platform, as suggested by para. 128 of Rudzicz.
With respect to claim 6, Rudzicz teaches a method, illustrated by figures 2 and 3, for monitoring (para. 36 lines 1-2) and recording (para. 89, line 1) in a surgical environment (see para. 85) with real-time analysis (para. 162, line 3) and video overlays, comprising:
a plurality of cameras (para. 90, lines 1-2 and para. 91, lines 5-6) to capture visual data within the medical environment (see para. 91, lines 1-3).
Rudzicz teaches that the cameras are useful to capture the medical environment, such as the surgical room, including the regions of the room that are restricted to sterile fields and non-sterile fields. Rudzicz teaches cameras with microphones (para. 166, line 10, see also para. 136, last three lines). The purpose of the microphones is to capture audio data of the surgical or monitoring process.
Rudzicz teaches deriving patient information by a plurality of sensors 34, for measuring patient data, see para. 166, beginning at line 10.
Rudzicz teaches a computer system, described at para. 55, lines 1-3, having a processor 104 and a memory unit 110 for receiving patient data (for example, video data; see also para. 166), wherein the processing unit is configured to receive patient data from the visual data from the cameras (see para. 90, line 1 and para. 127) and audio data from an audio device.
Rudzicz teaches wherein the processing unit is instructed to execute video encoding processes, see for example, para. 165 line 1 regarding medical data encoder 22.
Rudzicz teaches a plurality of cameras for receiving an input image wherein the image is medical or surgically related, (see para. 91).
Rudzicz teaches a reference object, such as a feature vector identifying certain features, with a bounding box, within the surgical video. The features within the bounding box could be the head portions of those in the operating room performing medical services. This information has identification in the video stream for at least the reason that designated portions can be blurred. That the reference object is stored on a database is suggested at the last few lines of para. 134. The identification of the video stream includes feature vectors that are transferred and held in a data centre (the database as claimed).
Rudzicz teaches overlay of a bounding box in proximity to the feature vectors which describe the objects a user might want identified (i.e., the number of persons in the operating room). The bounding box has display coordinates, see para. 132, lines 8 and 9.
Rudzicz teaches overlaying the operating room floorplan data structure with the detected movement of individuals between sterile and non-sterile fields (see para. 104).
What is not taught by Rudzicz is receiving audio with respect to one of the cameras having a microphone; converting the audio to text; transmitting the text to a word database; and training an AI model using the word database.
With respect to receiving audio associated with a camera, Rudzicz teaches multiplexing sound with the video (para. 126) and the use of audio devices 132 (para. 176). Rudzicz does not specify that the audio devices use a microphone.
Goodwin teaches a plurality of cameras 106 and 108, as set forth in at least para. 22. At para. 28, Goodwin teaches video processing with the cameras and validating spatial movement by audio-processing where the audio signals are generated from microphone(s) 102. See also the bottom of para.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to substitute microphones, as taught by Goodwin, as the audio devices for capturing speech during a video event.
With respect to converting audio into parsed text, Goodwin teaches this feature as set forth in para. 35. Goodwin teaches “….transcribing speech to text by feeding audio signals from microphones 102 to an AI natural language processing model and audio voice transcription via deep neural networks and text enhancements.”
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the processor 104 of Rudzicz so that it includes an audio conversion algorithm as described by Goodwin for the purpose of transcribing speech to text.
With respect to transmitting parsed text to a word database, Goodwin teaches microphones for recording audible speech. AI models convert the speech to text. A cloud server 122 analyzes the text data using classification modules. Goodwin teaches data being captured and stored, see para. 51, lines 5-7. See also para. 52.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teaching of Goodwin with Rudzicz by using a speech-to-text converter to convert speech into text that is stored in a database, as suggested by Goodwin.
Regarding training an AI algorithm using the spoken word database, Rudzicz teaches using AI for identifying persons in the operating space. Goodwin also uses AI to extract and accumulate procedures in an operating or training room; at para. 23, Goodwin further teaches using AI to locate and identify instruments used in the medical procedure.
At the bottom of page 35, Goodwin teaches using speech converted to text, and that text enhancements are used in deep neural networks to achieve textual transcripts that identify the speaker. It is presumed that the text information is used in the AI algorithm for identifying the speaker who generated the text. Hence, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that in the same way that both Rudzicz and Goodwin use AI to identify the heads of those in the operating space, voice prints/spoken words can, as suggested by Goodwin, be used in an AI training sequence to identify the voices corresponding to those identified in the operating space.
Since Rudzicz and Goodwin are both directed to monitoring events in an operating room using video recording, the purpose of converting speech to words, as set forth by Goodwin, would have been recognized in the system of Rudzicz.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teaching of Goodwin with Rudzicz by using a speech-to-text converter to convert speech into text that is stored in a database, as suggested by Goodwin.
With respect to claim 7, see the top of page 35 regarding at least the “pressure sensor”. The motivation for the rejection is the same as that to claim 6.
With respect to claim 8, Rudzicz teaches a communication interface 106, operable over a secure network (see para. 122), for transmission and reception of voice, audio and patient information. See para. 136 and para. 173 for video and audio. See para. 176, lines 9-10 regarding patient records. See also figure 1 of Rudzicz. The motivation for the rejection is the same as that to claim 6.
With respect to claim 9, Rudzicz teaches a plurality of cameras 108 located on a surgical mount for microphone 102 and tracking sensor receiver 104. See the middle of figure 3. The motivation for the rejection is the same as that to claim 6.
With respect to claim 10, Rudzicz teaches the display coordinates but does not state that they are chroma-keyed.
However, Rudzicz teaches that a feature can be an individual characteristic of what is being observed (para. 128, lines 1-2). Moreover, Rudzicz teaches that the feature could be a specific color or shape; see also para. 128, lines 6-7. At para. 132, Rudzicz teaches a vector of bounding boxes (see line 5). Video operating platform 100 can detect approximately 40 heads at a time and can save the coordinates of all heads.
Rudzicz does not teach a chroma-key for each of the at least 40 bounding boxes.
Goodwin teaches an overlay of markers such that when an unauthorized person is in a prohibited zone, the user interface can initiate a breach alert and change the color of a section of the video region from green to yellow or from green to red (see the bottom of para. 43).
The Examiner contends that it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to set a color for each of the distinct bounding boxes based on different characteristics of those heads detected by the platform, as suggested by para. 128 of Rudzicz.
With respect to claim 11, Rudzicz teaches an annotative object, which is the blurred head of one performing a medical procedure or of one who assists another in performing a medical procedure. The annotative object may also be one of many medical personnel whose head is analyzed by platform 100 to authenticate their presence in the operating or procedure room.
With respect to claim 12, Rudzicz teaches the use of an audio device but does not teach the specific use of a microphone. Goodwin teaches a microphone 102 for transcribing speech to text by feeding audio signals to an AI natural language processing model and audio voice transcription via deep neural networks and text enhancements.
Goodwin teaches that the cloud server 122 allows a computer administrator to review video and audio for providing metrics indicative of the quality of performance. At para. 46, Goodwin teaches a user interface 326 that allows an audio recorded medical event to be studied or reviewed according to a verbal bookmark (a speaker identifier and time stamp 906 – see para. 46, lines 3 and 4).
Since Rudzicz and Goodwin are both directed to recording platforms for recording audio signals in a surgical process, the purpose of identifying a verbal bookmark would have been recognized by Rudzicz as set forth by Goodwin.
It would have been obvious to one of ordinary skill in the art to modify processor 104 so that an identifier or bookmark, generated with an interface unit, can be used to identify sections of the recorded video, as taught by Goodwin.
With respect to claim 13, Goodwin teaches the creation of audio signals from microphone(s) 102, see para. 46.
Neither Goodwin nor Rudzicz specifically mention applicability in the insurance business.
However, Goodwin teaches an administrative console and a user interface 326 of the administrative console 126 for establishing alerts in real time or to facilitate training as to what procedures should be avoided in the future. Goodwin teaches determining if a checklist had been followed, as well as room monitoring, dosage readings, and case information. See the lower portion of para. 40, where staff behaviors and events can be confirmed for compliance.
The admin console 126 serves as the means by which a medical insurance claim could be studied for compliance with insurance rules, such as the number of people in the surgical area, the skill level of personnel in the room, whether equipment was dropped and contaminated, the visual identification of those present, and whether the procedure was actually performed.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the combination of medical recording platforms taught by Rudzicz in view of Goodwin to allow verification of an insurance claim by reading the recorded results of the medical procedure to determine if all actions are in compliance with applicable insurance laws.
With respect to claim 14, Rudzicz in view of Goodwin teaches all of the subject matter upon which the claim depends except for the use of CPT codes.
Instead, Goodwin allows a user of a user interface to toggle various event tabs to allow audio and recorded video of procedures to be played back (see para. 42).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to replace or substitute the use of CPT codes with access to the type of surgical procedure, the date, time, audio, video, surgical staff, and the manner in which the surgical procedure was performed; see para. 40, beginning at line 12.
With respect to claim 15, Rudzicz teaches a computer-implemented system for monitoring and recording procedures with real-time algorithms and video overlays.
Rudzicz teaches a platform 100 with a processor 104, for executing instructions in memory 108 for implementing the claimed functions as set forth herein.
a plurality of cameras (para. 90, lines 1-2 and para. 91, lines 5-6) to capture visual data within the medical environment (see para. 91, lines 1-3).
Rudzicz teaches cameras with microphones (para. 166, line 10, see also para. 136, last three lines). The purpose of the microphones is to capture audio data of the surgical or monitoring process.
Rudzicz teaches a plurality of sensors 34 for measuring patient data, see para. 166, beginning at line 10.
Rudzicz teaches a computer system, described at para. 55, lines 1-3, having a processor 104 and a memory unit 110 for receiving patient data (for example, video data; see also para. 166), wherein the processing unit is configured to receive patient data from the visual data from the cameras (see para. 90, line 1 and para. 127) and audio data from an audio device.
Rudzicz teaches wherein the processing unit is instructed to execute video encoding processes, see for example, para. 165 line 1 regarding medical data encoder 22.
Rudzicz teaches a plurality of cameras for receiving an input image wherein the image is medical or surgically related, (see para. 91).
Rudzicz teaches a reference object, such as a feature vector identifying certain features, with a bounding box, within the surgical video. The features within the bounding box could be the head portions of those in the operating room performing medical services. This information has identification in the video stream for at least the reason that designated portions can be blurred. That the reference object is stored on a database is suggested at the last few lines of para. 134. The identification of the video stream includes feature vectors that are transferred and held in a data centre (the database as claimed).
Rudzicz teaches a user, via an administrative user interface, selecting one of 40 bounding boxes which identify the heads of those in the procedure room. Rudzicz teaches that at least one of the objects is generated as the images of the heads of personnel are derived from the scans of the cameras; see para. 91.
Rudzicz teaches overlay of a bounding box in proximity to the feature vectors which describe the objects a user might want identified (i.e., the number of persons in the operating room). The bounding box has display coordinates, see para. 132, lines 8 and 9.
Rudzicz teaches overlaying the operating room floorplan data structure with the detected movement of individuals between sterile and non-sterile fields (see para. 104).
What is not taught by Rudzicz is receiving audio with respect to one of the cameras having a microphone; converting the audio to text; transmitting the text to a word database; and training an AI model using the word database.
With respect to receiving audio associated with a camera, Rudzicz teaches multiplexing sound with the video (para. 126) and the use of audio devices 132 (para. 176). Rudzicz does not specify that the audio devices use a microphone.
Goodwin teaches a plurality of cameras 106 and 108, as set forth in at least para. 22. At para. 28, Goodwin teaches video processing with the cameras and validating spatial movement by audio-processing where the audio signals are generated from microphone(s) 102. See also the bottom of para.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to substitute microphones, as taught by Goodwin, as the audio devices for capturing speech during a video event.
With respect to converting audio into parsed text, Goodwin teaches this feature as set forth in para. 35. Goodwin teaches “….transcribing speech to text by feeding audio signals from microphones 102 to an AI natural language processing model and audio voice transcription via deep neural networks and text enhancements.”
With respect to transmitting parsed text to a word database, Goodwin teaches microphones for recording audible speech. AI models convert the speech to text. A cloud server 122 analyzes the text data using classification modules. Goodwin teaches data being captured and stored, see para. 51, lines 5-7. See also para. 52.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teaching of Goodwin with Rudzicz by using a speech-to-text converter to convert speech into text that is stored in a database, as suggested by Goodwin.
Regarding training an AI algorithm using the spoken word database, Rudzicz teaches using AI for identifying persons in the operating space. Goodwin also uses AI to extract and accumulate procedures in an operating or training room; at para. 23, Goodwin further teaches using AI to locate and identify instruments used in the medical procedure.
At the bottom of page 35, Goodwin teaches using speech converted to text, and that text enhancements are used in deep neural networks to achieve textual transcripts that identify the speaker. It is presumed that the text information is used in the AI algorithm for identifying the speaker who generated the text. Hence, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that in the same way that both Rudzicz and Goodwin use AI to identify the heads of those in the operating space, voice prints/spoken words can, as suggested by Goodwin, be used in an AI training sequence to identify the voices corresponding to those identified in the operating space.
Since Rudzicz and Goodwin are both directed to monitoring events in an operating room using video recording, the purpose of converting speech to words, as set forth by Goodwin, would have been recognized in the system of Rudzicz.
With respect to claim 16, see the top of page 35 regarding at least the “pressure sensor”.
The motivation for the rejection is the same as to claim 15.
With respect to claim 17, Rudzicz teaches all of the subject matter upon which the claims depend, except for a spoken word database allowing a user to bookmark using a key word as claimed.
Goodwin teaches an administration interface 126 to invoke recording and storage of the live operating procedure (see para. 40, lines 4-13). Goodwin further teaches setting various data tabs that allow the viewing of designated events; see para. 42. This includes the starting and ending points of the capture, which the user may determine. Goodwin teaches that a user interface 326 can display speech transcripts; see para. 46. The transcripts are written text which has been converted or translated from speech. Hence, the spoken word has been converted and stored. Goodwin teaches AI models that convert speech to text. A cloud server 122 analyzes the text data using classification modules. Goodwin teaches data being captured and stored, see para. 51, lines 5-7. See also para. 52.
The user interface displays a speaker identity (a name or ID of the medical personnel) or a timestamp 906 for each spoken statement; see para. 46. Hence, the timestamp or the identification of the speaker serves as a bookmark for customizing the user’s ability to access relevant data using the timestamp or personnel identification. Goodwin also teaches (para. 42, line 4) that a user can “pin” any one of the displays. Hence, the “pin” serves as a bookmark for selecting the relevant recordings of interest to the user, perhaps for designating the beginning of a sequence of video recordings that might be of interest to the user on the graphic interface 326.
Since Rudzicz and Goodwin are both directed to video recordings of a medical procedure, which allow a user to examine the procedure for compliance, the purpose of using a functioning bookmark would have been recognized by Rudzicz in view of Goodwin 778.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teaching of using a bookmark, taught by Goodwin, with the surgical platform of Rudzicz by using a graphic user interface to bookmark certain sections of the recording which are of particular interest to the user, by transcribing the spoken word into text and subsequently tagging or pinning the transcribed section as a bookmark for further study, as taught by Goodwin.
With respect to claim 19, Rudzicz teaches locally securing data source 110, through network 130, by means of a user interface 140 which must use security information as set forth in paras. 122 and 123.
Goodwin 778 teaches that cloud archive 122 (see figure 1) is utilized over a secure network 124.
Since Rudzicz and Goodwin 778 are both directed to video recording platforms, the purpose of securing local and cloud archives would have been recognized by one of ordinary skill in the art upon combining the two teachings.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the network of Rudzicz so that cloud data is communicated over a secure internal network and the local information that is archived is accessed by means of secure passwords as set forth by Rudzicz.
With respect to claim 20, Rudzicz teaches that the image that is output is timestamped (see para. 176, lines 11-13) and anonymized by generating de-identified video data, using blurred video data for the regions of the video where the heads of personnel are detected; see para. 87, lines 4-7. The motivation for this rejection is the same as that to claim 15.
Allowable Subject Matter
Claim 18 is allowed as the prior art does not teach generating a live and synthetic view in which a computer uses the two views to compare characteristics of parameters stored in the online database to detect anomalous activity.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEROME GRANT II whose telephone number is (571)272-7463. The examiner can normally be reached M-F 9:00 a.m. - 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JEROME GRANT II/Primary Examiner, Art Unit 2664