DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 4, 5, 7, and 8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Barral et al. (2019/0110856).
Regarding claim 1, Barral teaches a video editing device comprising: a memory configured to store instructions; and a processor configured to execute the instructions (para [0151] In one example, a device may include a processor or processors. The processor comprises a computer-readable medium, such as a random access memory (RAM) coupled to the processor. The processor executes computer-executable program instructions stored in memory, such as executing one or more computer programs) to: acquire an endoscopic video taken by an endoscope (para [0044] Endoscope 251 is coupled to computing device 207 to output surgical video 265 to computing device 207.); acquire a first timing at which a lesion is detected from the endoscopic video (para [0030] a machine-learning (“ML”) technique, to quickly bookmark points within a surgical video, share those bookmarks, excerpt video(s) from the full video using the bookmarks, to identify the type of surgical procedure and the steps, sub-steps, or events of interest within the surgical procedure. Para [0047] For example, if a bleeding event is detected, the computing device 207 may increase a sampling rate to process more frames likely to capture the bleeding event and any responses from the surgeon to the event. These frames may be processed to potentially capture other events or sub-steps occurring during the event. Alternatively, rather than processing such frames in real-time, the computing device may annotate one or more frames with metadata indicating the frames should be processed after the surgical procedure has been completed.); acquire a second timing at which an examiner instructs photographing based on the endoscopic video (para [0048] In some examples, the detected step of the surgical procedure may be presented to the surgeon on the display for informational purposes. 
Further, in some examples, the surgeon may provide one or more inputs confirming that the detected step is in fact occurring,); and generate an edited video by extracting, from the endoscopic video, a partial video of a predetermined time period before and after a timing corresponding to at least one of the first timing and the second timing, and output the edited video (para [0047] Such annotations may include annotations on frames preceding the detected interesting feature since, the interesting feature may have begun prior to the frame in which it was detected. Thus, the computing device may generate annotated windows around potential events in real-time, but save the computationally expensive processing until after the surgical procedure has been completed. para [0082] Alternatively to selecting the “extract individual video segments” option 910, the user may select the “extract single video” option 920. In this example, such a selection would cause the system 100 to generate a single video file including all video frames beginning at bookmark 422b and ending at bookmark 422d. If non-consecutive bookmarks are selected, e.g., 422a, 422c, and 422d, some example systems may extract video frames between bookmark 422a and 422b as well as between 422c and 422d and store them in a single video file. Or some examples may treat each selected bookmark as a selected segment, thus selecting bookmarks 422a, 422c, and 422d may cause some systems to extract all frames between bookmarks 422a-422b, 422c-422d, and 422d-422e.).
Regarding claim 2, Barral teaches the video editing device according to claim 1, wherein the processor generates the edited video by extracting the partial video for the timing corresponding to the first timing and not corresponding to the second timing (para [0057] In this example, the GUI 500 allows the user to supply names for each bookmark 422a-e, edit the position of each bookmark 422a-e within the video, see the exact timestamp associated with each bookmark, add or delete bookmarks, or specify the type of surgical procedure 524. Para [0082] In this example, such a selection would cause the system 100 to generate a single video file including all video frames beginning at bookmark 422b and ending at bookmark 422d.) (Note: para [0082] describes generating a single video based on selected bookmarks, and para [0057] describes adding or deleting bookmarks. Thus, when a bookmark corresponding to the examiner's instruction, i.e., the second timing, is deleted, the single video generated will not correspond to the second timing.).
Regarding claim 4, Barral teaches the video editing device according to claim 1, wherein the processor is further configured to divide the endoscopic video into plural types of sections based on whether or not the endoscopic video corresponds to the first timing and the second timing (para [0030] To reduce the burden on surgeons who might otherwise be required manually process their own video, or other surgical videos they access, an illustrative system enables various techniques to allow the surgeon, or a machine-learning (“ML”) technique, to quickly bookmark points within a surgical video, share those bookmarks, excerpt video(s) from the full video using the bookmarks, to identify the type of surgical procedure and the steps, sub-steps, or events of interest within the surgical procedure. Such techniques may allow surgeons to more efficiently review surgical videos, improve their surgical techniques, assess errors and corrective actions or training needs, or search a corpus of videos for specific portions of surgical videos, such as for specific steps of a particular type surgical procedure or events that occurred during one or more surgical procedures. Para [0033]), and display the plural types of sections as options (para [0035] In addition to navigating directly to a video segment of interest, the bookmarks may also be used to further manipulate the video. For example, if the surgeon wishes to share a segment of the video with a colleague, she can select a bookmarks, or multiple bookmarks, and select a “share” option to generate a message to the colleague including a reference to the video and the bookmark.
The message would then allow the recipient to jump to the bookmarked location within the video.), wherein the processor generates the edited video by extracting the partial video belonging to the section of the type selected by a user from among the plural types of sections (para [0036] Alternatively, the surgeon can select one or more segments of video by selecting the corresponding bookmarks and selecting an option to extract the segment(s) of video.).
Regarding claim 5, Barral teaches the video editing device according to claim 1, wherein the processor generates the edited video by concatenating the partial videos in a time series (para [0053] The video timeline 412 is shown with a cursor 414 showing the current frame of the video 471 in the timeline, which can be used to scrub through the video 471; Fig. 5; para [0082] Alternatively to selecting the “extract individual video segments” option 910, the user may select the “extract single video” option 920. In this example, such a selection would cause the system 100 to generate a single video file including all video frames beginning at bookmark 422b and ending at bookmark 422d. If non-consecutive bookmarks are selected, e.g., 422a, 422c, and 422d, some example systems may extract video frames between bookmark 422a and 422b as well as between 422c and 422d and store them in a single video file).
Regarding claim 7, Barral teaches a video editing method comprising: acquiring an endoscopic video taken by an endoscope (para [0044] Endoscope 251 is coupled to computing device 207 to output surgical video 265 to computing device 207.); acquiring a first timing at which a lesion is detected from the endoscopic video (para [0030] a machine-learning (“ML”) technique, to quickly bookmark points within a surgical video, share those bookmarks, excerpt video(s) from the full video using the bookmarks, to identify the type of surgical procedure and the steps, sub-steps, or events of interest within the surgical procedure. Para [0047] For example, if a bleeding event is detected, the computing device 207 may increase a sampling rate to process more frames likely to capture the bleeding event and any responses from the surgeon to the event. These frames may be processed to potentially capture other events or sub-steps occurring during the event. Alternatively, rather than processing such frames in real-time, the computing device may annotate one or more frames with metadata indicating the frames should be processed after the surgical procedure has been completed.); acquiring a second timing at which an examiner instructs photographing based on the endoscopic video (para [0048] In some examples, the detected step of the surgical procedure may be presented to the surgeon on the display for informational purposes. 
Further, in some examples, the surgeon may provide one or more inputs confirming that the detected step is in fact occurring,); generating an edited video by extracting, from the endoscopic video, a partial video of a predetermined time period before and after a timing corresponding to at least one of the first timing and the second timing; and outputting the edited video (para [0047] Such annotations may include annotations on frames preceding the detected interesting feature since, the interesting feature may have begun prior to the frame in which it was detected. Thus, the computing device may generate annotated windows around potential events in real-time, but save the computationally expensive processing until after the surgical procedure has been completed. para [0082] Alternatively to selecting the “extract individual video segments” option 910, the user may select the “extract single video” option 920. In this example, such a selection would cause the system 100 to generate a single video file including all video frames beginning at bookmark 422b and ending at bookmark 422d. If non-consecutive bookmarks are selected, e.g., 422a, 422c, and 422d, some example systems may extract video frames between bookmark 422a and 422b as well as between 422c and 422d and store them in a single video file. Or some examples may treat each selected bookmark as a selected segment, thus selecting bookmarks 422a, 422c, and 422d may cause some systems to extract all frames between bookmarks 422a-422b, 422c-422d, and 422d-422e.).
Regarding claim 8, Barral teaches a non-transitory computer-readable recording medium recording a program (para [0151] In one example, a device may include a processor or processors. The processor comprises a computer-readable medium, such as a random access memory (RAM) coupled to the processor. The processor executes computer-executable program instructions stored in memory, such as executing one or more computer programs), the program causing a computer to execute processing of (para [0152]): acquiring an endoscopic video captured by an endoscope (para [0044] Endoscope 251 is coupled to computing device 207 to output surgical video 265 to computing device 207.); acquiring a first timing at which a lesion is detected from the endoscopic video (para [0030] a machine-learning (“ML”) technique, to quickly bookmark points within a surgical video, share those bookmarks, excerpt video(s) from the full video using the bookmarks, to identify the type of surgical procedure and the steps, sub-steps, or events of interest within the surgical procedure. Para [0047] For example, if a bleeding event is detected, the computing device 207 may increase a sampling rate to process more frames likely to capture the bleeding event and any responses from the surgeon to the event. These frames may be processed to potentially capture other events or sub-steps occurring during the event. Alternatively, rather than processing such frames in real-time, the computing device may annotate one or more frames with metadata indicating the frames should be processed after the surgical procedure has been completed.); acquiring a second timing at which an examiner instructs photographing based on the endoscopic video (para [0048] In some examples, the detected step of the surgical procedure may be presented to the surgeon on the display for informational purposes. 
Further, in some examples, the surgeon may provide one or more inputs confirming that the detected step is in fact occurring,); generating an edited video by extracting, from the endoscopic video, a partial video of a predetermined time period before and after a timing corresponding to at least one of the first timing and the second timing; and outputting the edited video (para [0047] Such annotations may include annotations on frames preceding the detected interesting feature since, the interesting feature may have begun prior to the frame in which it was detected. Thus, the computing device may generate annotated windows around potential events in real-time, but save the computationally expensive processing until after the surgical procedure has been completed. para [0082] Alternatively to selecting the “extract individual video segments” option 910, the user may select the “extract single video” option 920. In this example, such a selection would cause the system 100 to generate a single video file including all video frames beginning at bookmark 422b and ending at bookmark 422d. If non-consecutive bookmarks are selected, e.g., 422a, 422c, and 422d, some example systems may extract video frames between bookmark 422a and 422b as well as between 422c and 422d and store them in a single video file. Or some examples may treat each selected bookmark as a selected segment, thus selecting bookmarks 422a, 422c, and 422d may cause some systems to extract all frames between bookmarks 422a-422b, 422c-422d, and 422d-422e.).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Barral et al. (2019/0110856) in view of Iampietro et al. (2014/0289594).
Regarding claim 3, Barral teaches the video editing device as explained for claim 1 above.
Barral fails to teach wherein the processor is further configured to perform a qualitative determination for the partial video of the first timing and the second timing, wherein the processor generates the edited video by extracting the partial video on which the qualitative determination is performed, as claimed.
Iampietro teaches a media editing system comprising a processor (104; Fig. 1) that is further configured to perform a qualitative determination for the partial video of the first timing and the second timing, wherein the processor generates the edited video by extracting the partial video on which the qualitative determination is performed (para [0017] user interface module 122 configures computing system 102 to provide a user interface for editing footage of a media presentation. The footage may comprise one or more portions having video and/audiovisual content. Para [0019] Qualitative evaluation module 126 configures computing system 102 to access metadata associated with the media presentation and use the metadata to determine a qualitative score for at least one video segment of the media presentation as a measure of how interesting the segment is. In some embodiments, a qualitative score is determined for each frame of the media presentation by accumulating score values for various factors identified from an analysis of metadata associated with the frame. Para [0039] Block 308 represents removing one or more portions or otherwise editing the media presentation based on comparing the scores of respective segments to a parameter or parameters. For example, the score for a segment may be compared to a threshold score and, if the score is below the threshold, the segment may be identified for removal.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device of Barral with the teachings of Iampietro, because doing so would result in generating a video that includes only relevant information, thus streamlining the editing process.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Barral et al. (2019/0110856) in view of Peker (2007/0201817).
Regarding claim 6, Barral teaches the video editing device as explained for claim 1 above.
Barral fails to teach wherein the processor generates the edited video in which the partial video is reproduced at a normal speed and a part of the endoscopic video other than the partial video is reproduced at a higher speed than the normal speed, as claimed.
Peker teaches a video playback system that generates a video in which a partial video is reproduced at a normal speed and a part of the video other than the partial video is reproduced at a higher speed than the normal speed (para [0009] The segmented video 121 can then be played back 150 so that the summary segments 121 are played back at a normal speed, and the skipped segments are played back at a speed corresponding to the complexity level. For example, the play back speed of the skipped segments is slow when the visual complexity is high, and fast when the visual complexity is low. That is, the play back is adaptive to the content of the video.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device of Barral with the teachings of Peker, because doing so would result in generating endoscopic video segments with varying playback speeds, such that the relevant segments can be viewed and analyzed at normal speed while non-relevant segments are played back at a higher speed, thus saving the user time and improving the user experience.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PREMAL PATEL whose telephone number is (571)270-5892. The examiner can normally be reached Mon-Fri 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MATTHEW EASON can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PREMAL R PATEL/Primary Examiner, Art Unit 2624