DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
1. Regarding the rejections under 35 U.S.C. § 103, Applicant's arguments filed 10/23/2025 have been fully considered but they are not persuasive.
Applicant argues in the Remarks that the Zahavi reference does not specifically disclose that the one or more transcribed context portions are “based on a content of the user comment” (see pgs. 9-10), since the user input “does not refer to the ‘content’ or actual words in the user comment…Instead, Zahavi refers to the time at which the user has provided an input…” (see page 10, 1st para.). The Examiner respectfully disagrees that Zahavi does not teach this claimed limitation. The limitation is recited broadly and, under the broadest reasonable interpretation, encompasses obtaining the context portions based on any form of content in the user comment (including content such as particular alphanumeric symbol(s) present in the user comment). Furthermore, the limitation as claimed does not specify how this content is used to obtain the transcribed context portion, and only requires that the content in the user comment be used in some way to obtain the transcribed context portion. Zahavi teaches obtaining transcribed context portions (identifying relevant sections of a transcript for a question) based on the content of the user comment (e.g., upon determining that the content of the user comment includes an initial marking “#” followed by a “?” input by the user, performing the identification of the transcribed context portions). This operation reads on obtaining the one or more transcribed context portions “based on a content of the user comment”.
Hence, Applicant’s arguments are not persuasive.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
2. Claims 1-3, 5-11, 13-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Roedel in view of Zahavi et al. (US PGPUB No. 2024/0013158, hereinafter Zahavi).
Regarding claim 1, Roedel discloses A method comprising: obtaining online conference data from an online conference (Fig. 1, 151 “Code/Data”; Col. 6 Lines 45-47 “The applications 141 may receive and send code/data 151. In some configurations, the code/data 151 can be in the form of text, images, media or any other form of data.”) between a plurality of user endpoints (Fig. 1, 106A-N “Computing Device”), wherein at least one of a meeting server connected to the plurality of user endpoints and a first user endpoint of the plurality of user endpoints is arranged to coordinate the online conference and provide the online conference data (Col. 17 Lines 14-37 “…the device(s) 610 of the system 602 include a server module 630 and an output module 632. In this example, the server module 630 is configured to receive, from individual client computing devices such as client computing devices 606(1) through 606(N), media streams 634(1) through 634(N)… Thus, the server module 630 is configured to receive a collection of various media streams 634(1) through 634(N) during a live viewing of the communication session 604 (the collection being referred to herein as “media data 634”).”; Col. 17 Lines 41-44 “Consequently, the server module 630 may be configured to generate session data 636 based on the streams 634 and/or pass the session data 636 to the output module 632.”); providing, by a user device in communication with at least one of the meeting server and the first user endpoint (Col. 17-18 Lines 62-67 and Lines 1-13 “…the device(s) 610 and/or the client module 620 can include GUI presentation module 640. The GUI presentation module 640 may be configured to analyze communication data 639 that is for delivery to one or more of the client computing devices 606…The presentation GUI 646 may be caused to be rendered on the display screen 629(1) by the GUI presentation module 640. The presentation GUI 646 may include the video, image, and/or content analyzed by the GUI presentation module 640.”), an output of the online conference data to a user interface of the user device (Col. 7 Lines 7-13 “The code/data 151 can be communicated to any number of computing devices 106, referred to herein as computing devices 106B-106N, from a first computing device 106A or the service 110 via a network 108. Each computing device 106B-106N associated with a recipient can display the code/data 151 on a user interface 195 (195A-195N) by the use of a viewing application 142.”; Fig. 2A shows an example user interface); obtaining, at the user device (Col. 7 Lines 10-13 “Each computing device 106B-106N associated with a recipient can display the code/data 151 on a user interface 195 (195A-195N) by the use of a viewing application 142.”; Col. 3 Lines 23-24 “FIG. 2B illustrates an example user interface in accordance with an embodiment”), a user comment submitted by a user from the user interface of the user device (Fig. 2B, 299; Fig. 2C, 280; Col. 8 Lines 26-28 “FIG. 2C illustrates that in response to entering a message in entry pane 299, the typed message 280 may be rendered.”); obtaining one or more transcribed context portion elements of the online conference data (Col. 6 Lines 60-67 “The caption/transcription service 180 may generate captions or transcriptions for systems and devices via applications 141 and can send, for example, a caption or transcription to a computing device 106A…The computing device 106A may receive the captions or transcripts for presentation on UI 190A-190N.”) at the user interface (Fig. 2A, one or more transcribed context portion elements (241-244) displayed in pane 230)…wherein the one or more transcribed context portions are obtained from a transcription service incorporated in at least one of the meeting server and the first user endpoint, wherein the one or more transcribed context portion elements are generated by the transcription service… (Col. 6 Lines 60-67 “The caption/transcription service 180 may generate captions or transcriptions for systems and devices via applications 141 and can send, for example, a caption or transcription to a computing device 106A…The computing device 106A may receive the captions or transcripts for presentation on UI 190A-190N.”; Fig. 6: “Server Module 630” determines “Session Data 636” to pass to “Output Module 632” for output at a user device GUI; Col. 17 Lines 41-52 “Consequently, the server module 630 may be configured to generate session data 636 based on the streams 634 and/or pass the session data 636 to the output module 632. Then, the output module 632 may communicate communication data 639 to the client computing devices (e.g., client computing devices 606(1) through 606(3) participating in a live viewing of the communication session). The communication data 639 may include video, audio, and/or other content data, provided by the output module 632 based on content 650 associated with the output module 632 and based on received session data 636.”; The user GUI includes transcription data: Fig. 2D, “Transcript Pane 230”); selecting, by the user at the user interface, at least a first transcribed context portion element from the one or more transcribed context portion elements (Col. 7 Lines 54-61 “While viewing the meeting, a user may determine that a chat message would be helpful with the meeting participants, with reference to message 243. Rather than copying or typing the transcript contents and opening a chat pane, the user may hover over message 243. When hovering over message 243, a function pane 245 may be rendered with selectable options. The user may click the reply icon (arrow).”; Fig. 2A, user selects icon (245) for one of the transcribed context portion elements (one of Messages 241-244)); generating, by the user device (Col. 3 Lines 25-26 “FIG. 2C illustrates an example user interface in accordance with an embodiment.”), an annotated comment comprising the user comment and the at least first transcribed context portion element (Fig. 2C, 298 and 280); and adding the annotated comment from the user device to the online conference data (Col. 9 Lines 59-67 “…the transcript of the meeting may highlight or otherwise include a visual indication to indicate that that the quoted portion of the transcript a chat message pertains to an associated chat message…”).
Roedel does not specifically disclose:
[obtaining one or more transcribed context portion elements of the online conference data at the user interface] in response to obtaining the user comment
[wherein the one or more transcribed context portion elements are] based on a content of the user comment.
Zahavi teaches obtaining a transcribed context portion of online conference data in response to obtaining the user comment (in response to a user providing a comment (inputting the user comment “#?”), a portion of the transcript is identified as context (a portion of the transcript is identified as being the question the user wishes to tag): para. 0075 “…Continuing the example above, the user inputting a symbol “#” serves as the initial marking or wake-word indicating that the user is inputting an annotation that should be included as an event of interest. In this example, the annotations determiner 266 associates a timing when the user input the symbol “#” with a time during the meeting or meeting recording. In one embodiment, the timing when the user input the initial marking or wake-word corresponds to an event time associated with the event of interest. By associating the time during the meeting or meeting recording during which the user input the “#” symbol, the annotations determiner 266 may analyze the time in the meeting preceding or following the user input to determine an event of interest. For example, a user watching a meeting recording inputs “#?” at 20 minutes into the meeting recording, where “?” is the subsequent marking or word indicating that the type of event of interest is a question. Based on the “#?” symbol, the annotations determiner 266 automatically analyze the meeting recording (for example, transcript) to identify a question being asked as discussed above. In some embodiments, the initial marking (or wake-word) or the subsequent marking or word is predefined and can be specified by the user or administrator, and stored in user profile 240.”). Zahavi further teaches [wherein the one or more transcribed context portion elements are] based on a content of the user comment (para. 0075; based on analyzing the content of the text (“#?”), the service searches for and identifies a question with which the user comment is associated).
Roedel and Zahavi are considered to be analogous to the claimed invention as they are both in the same field of online communication sessions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Roedel to incorporate the teachings of Zahavi in order to obtain the transcribed context portion in response to obtaining the user comment. Doing so would be beneficial, as it would make generating annotated comments more efficient by reducing the amount of time the user spends manually searching for and selecting a particular transcribed context portion, instead detecting it automatically. Furthermore, it would have been obvious to have the transcribed context portion element be based on a content of the user comment. Doing so would be beneficial, as it would allow the automatic generation of annotated comments to yield more desirable results, since the selected context portion elements would be relevant to the user comment, improving user experience.
Regarding claim 2, Roedel in view of Zahavi discloses The method of claim 1, wherein the one or more transcribed context portion elements based on the content of the user comment include at least one suggested context portion for use to annotate the user comment (Zahavi discloses obtaining a transcribed context portion based on a content of the user comment: para. 0075 “…Continuing the example above, the user inputting a symbol “#” serves as the initial marking or wake-word indicating that the user is inputting an annotation that should be included as an event of interest. In this example, the annotations determiner 266 associates a timing when the user input the symbol “#” with a time during the meeting or meeting recording. In one embodiment, the timing when the user input the initial marking or wake-word corresponds to an event time associated with the event of interest. By associating the time during the meeting or meeting recording during which the user input the “#” symbol, the annotations determiner 266 may analyze the time in the meeting preceding or following the user input to determine an event of interest. For example, a user watching a meeting recording inputs “#?” at 20 minutes into the meeting recording, where “?” is the subsequent marking or word indicating that the type of event of interest is a question. Based on the “#?” symbol, the annotations determiner 266 automatically analyze the meeting recording (for example, transcript) to identify a question being asked as discussed above. In some embodiments, the initial marking (or wake-word) or the subsequent marking or word is predefined and can be specified by the user or administrator, and stored in user profile 240.”; Roedel discloses annotating a user comment with a suggested context portion: Col. 10 Lines 37-39 “While many of the examples are illustrated using a single transcription quote or caption, the user may be provided a way to select a portion of a quote or multiple quotes”; see Fig. 2C, user comment (280) annotated with context portion (298)).
Roedel and Zahavi are considered to be analogous to the claimed invention as they are both in the same field of online communication sessions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Roedel to incorporate the teachings of Zahavi in order to obtain the transcribed context portion based on the content of the user comment. Doing so would be beneficial, as it would allow the automatic generation of annotated comments to yield more desirable results, since the selected context portion elements would be relevant to the user comment, improving user experience.
Regarding claim 3, Roedel in view of Zahavi discloses selecting an earlier time frame or a later time frame of the online conference data from which the one or more transcribed context portion elements are generated to include at least one suggested context portion based on the content of the user comment for use to annotate the user comment (Zahavi discloses obtaining a transcribed context portion based on a content of the user comment (see claim 2 for mapping); Roedel, Fig. 2A, 230 and 241-244; with respect to Message 243, Transcript Pane 230 provides functionality for both selecting a transcribed context portion from an earlier time frame (e.g. Message 242) and selecting a transcribed context portion from a later time frame (e.g. Message 244) of the online conference).
Roedel and Zahavi are considered to be analogous to the claimed invention as they are both in the same field of online communication sessions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Roedel to incorporate the teachings of Zahavi in order to obtain the transcribed context portion based on the content of the user comment. Doing so would be beneficial, as it would allow the automatic generation of annotated comments to yield more desirable results, since the selected context portion elements would be relevant to the user comment, improving user experience.
Regarding claim 5, Roedel in view of Zahavi discloses further comprising adjusting the at least first transcribed context portion element based on the user comment obtained at the user device, wherein adjusting the at least first transcribed context portion element includes increasing or decreasing a length of the at least first transcribed context portion element (Zahavi discloses obtaining a transcribed context portion based on a content of the user comment (see claim 2 for mapping); Roedel, Col. 10 Lines 37-45 “While many of the examples are illustrated using a single transcription quote or caption, the user may be provided a way to select a portion of a quote or multiple quotes. For example, a user may select a series of quotes from one or more speakers, and initiate a chat message based on the selected quotes…As another example, the user may select a portion of a quote and initiate a chat message based on the selected portion.”).
Roedel and Zahavi are considered to be analogous to the claimed invention as they are both in the same field of online communication sessions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Roedel to incorporate the teachings of Zahavi in order to obtain the transcribed context portion based on the content of the user comment. Doing so would be beneficial, as it would allow the automatic generation of annotated comments to yield more desirable results, since the selected context portion elements would be relevant to the user comment, improving user experience.
Regarding claim 6, Roedel in view of Zahavi discloses The method of claim 1, wherein the user comment is received after a conclusion of the online conference (Roedel, Col. 4 Lines 61-65 “In some embodiments, the reply to/quoting the online meeting audio/video conversation with the chat message may be initiated after the communications session, from captions or transcripts that are accessible from the saved meeting data.”).
Regarding claim 7, Roedel in view of Zahavi discloses The method of claim 1, wherein the user device is among the plurality of user endpoints (Roedel, Fig. 1 Computing Device 106A; Col. 6 Lines 29-32 “In this example, a user can interact with an individual application 141 to launch and participate in applications such as a communications session and send and receive messages.”).
Regarding claim 8, Roedel in view of Zahavi discloses The method of claim 1, wherein generating the annotated comment includes adding metadata from the online conference data about the at least first transcribed context portion element (Roedel, Fig. 2B 298; Col. 8 Lines 19-21 “Additionally, message 298 includes a time and/or date stamp to indicate the time/date of the original quote.”).
Regarding claim 9, claim 9 is an apparatus claim with limitations similar to method claim 1, and is thus rejected under a similar rationale.
Additionally, Roedel discloses An apparatus (Fig. 6, 606(1) “Client Computing Device”) comprising: a network interface configured to communicate with computing devices in a computer network (Fig. 6, 624; Col. 16 Lines 31-36 “Client computing device(s) 606(1) through 606(N)…may also include one or more interface(s) 624 to enable communications between client computing device(s) 606(1) through 606(N)…”); a user interface configured to interact with a user of the apparatus (Fig. 6, 626 “I/O Device(s)” and 629(1) “Display Screen”; Col. 16 Lines 40-54 “…client computing device(s) 601(1) through 606(N) can include input/output (“I/O”) interfaces (devices) 626 that enable communications with input/output devices such as user input devices…”, “…client computing device 606(1) is in some way connected to a display device (e.g., a display screen 629(1)), which can display a UI…”); and a processor coupled to the network interface and the user interface (Fig. 6, 692 “Processing Unit(s)”; processor coupled to network interface 624 and user interface 626 via Bus 616), the processor configured to (Col. 12 Lines 34-47).
Regarding claim 10, claim 10 is rejected for analogous reasons to claim 2.
Regarding claim 11, claim 11 is rejected for analogous reasons to claim 3.
Regarding claim 13, claim 13 is rejected for analogous reasons to claim 7.
Regarding claim 14, claim 14 is rejected for analogous reasons to claim 8.
Regarding claim 15, claim 15 is a non-transitory computer readable storage media claim with limitations similar to method claim 1, and is thus rejected under a similar rationale.
Additionally, Roedel discloses One or more non-transitory computer readable storage media encoded with software comprising computer executable instructions that, when the software is executed on a user device, is operable to cause a processor of the user device to (Col. 12 Lines 34-47 “It should be appreciated that the above-described subject matter may be implemented as a computer-controlled apparatus, a computer process, a computing system, or as an article of manufacture such as a computer-readable storage medium…In the context of software, the operations represent computer-executable instructions stored on one or more computer-readable media that, when executed by one or more processors, enable the one or more processors to perform the recited operations.”).
Regarding claim 16, claim 16 is rejected for analogous reasons to claim 2.
Regarding claim 17, claim 17 is rejected for analogous reasons to claim 3.
Regarding claim 19, claim 19 is rejected for analogous reasons to claim 6.
Regarding claim 20, claim 20 is rejected for analogous reasons to claim 8.
3. Claims 4, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Roedel in view of Zahavi, and further in view of Stoker et al. (US PGPUB No. 2020/0321007, hereinafter Stoker).
Regarding claim 4, Roedel in view of Zahavi discloses further comprising adjusting the at least first transcribed context portion element based on the user comment obtained at the user device (Zahavi discloses obtaining a transcribed context portion based on a content of the user comment (see claim 2 for mapping); Roedel, Col. 10 Lines 37-45 “While many of the examples are illustrated using a single transcription quote or caption, the user may be provided a way to select a portion of a quote or multiple quotes. For example, a user may select a series of quotes from one or more speakers, and initiate a chat message based on the selected quotes…As another example, the user may select a portion of a quote and initiate a chat message based on the selected portion.”).
Roedel and Zahavi are considered to be analogous to the claimed invention as they are both in the same field of online communication sessions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Roedel to incorporate the teachings of Zahavi in order to obtain the transcribed context portion based on the content of the user comment. Doing so would be beneficial, as it would allow the automatic generation of annotated comments to yield more desirable results, since the selected context portion elements would be relevant to the user comment, improving user experience.
Roedel in view of Zahavi does not specifically disclose wherein adjusting the at least first transcribed context portion element includes editing text in the at least first transcribed context portion element.
Stoker teaches a method for interacting with a real-time audio transcription of a video conference (Stoker, Paragraph 0067 Lines 1-13) in which a user can obtain a transcribed context portion (Stoker, Fig. 7 210; Paragraph 0071 Lines 1-6) and adjust a transcribed context portion, wherein adjusting the at least first transcribed context portion element includes editing text in the at least first transcribed context portion element (Stoker, Fig. 7 212; Paragraph 0072 “…the user may utilize an editing bar 212 to edit the results as necessary, for example to correct incorrectly transcribed words, supply missing or inaudible words, or to provide context and clarity as necessary.”; Paragraph 0073 “…The edits made to the transcription in the transcription bar 210 may include any suitable formatting or effects, such as style, comments, key words, highlighting, font, color, weight, size, and other effects…”).
Roedel, Zahavi, and Stoker are considered to be analogous to the claimed invention as they are all in the same field of online communication sessions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Roedel in view of Zahavi to incorporate the teachings of Stoker in order to enable a user to adjust a transcribed context portion by editing text in the transcribed context portion. Doing so would allow users to correct errors in the transcription, improving the experience for participants (Stoker, Paragraph 0071 Lines 6-12).
Regarding claim 12, claim 12 is rejected for analogous reasons to claim 4.
Regarding claim 18, claim 18 is rejected for analogous reasons to claim 4.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Decrop et al. (US 2022/0012298 A1): performing linguistic analysis on a comment and a transcript, and determining a timestamp which corresponds to the user comment (para. 0027-0035, Fig. 2).
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CODY DOUGLAS HUTCHESON whose telephone number is (703)756-1601. The examiner can normally be reached M-F 8:00AM-5:00PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir, can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CODY DOUGLAS HUTCHESON/Examiner, Art Unit 2659
/PIERRE LOUIS DESIR/Supervisory Patent Examiner, Art Unit 2659