Prosecution Insights
Last updated: April 17, 2026
Application No. 18/175,794

SYNCHRONOUS WIDGET AND A SYSTEM AND METHOD THEREFOR

Final Rejection — §103

Filed: Feb 28, 2023
Examiner: VU, NGOC K
Art Unit: 2421
Tech Center: 2400 — Computer Networks
Assignee: unknown
OA Round: 4 (Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 5-6
Median Time to Grant: 3y 11m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 72% (181 granted / 253 resolved; +13.5% vs TC avg, above average)
Interview Lift: +13.9% on resolved cases with an interview (moderate lift)
Typical Timeline: 3y 11m average prosecution; 15 applications currently pending
Career History: 268 total applications across all art units
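The headline rates above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (variable names are mine, not the tool's):

```python
# Recompute the examiner statistics shown above from the raw counts.
granted = 181
resolved = 253

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 71.5%, displayed as 72%

# "+13.5% vs TC avg" implies the Tech Center average allow rate is about:
tc_avg = allow_rate - 0.135
print(f"Implied TC average: {tc_avg:.1%}")      # ~58.0%
```

Note the rounding: 181/253 is 71.5%, which the dashboard rounds up to 72%.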

Statute-Specific Performance

§101: 4.9% (-35.1% vs TC avg)
§102: 18.4% (-21.6% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 253 resolved cases.

Office Action

§103
Response to Arguments

Applicant's arguments filed 01/06/2026 have been fully considered but they are not persuasive. Applicant mainly argues that "visual element" in the Office Action does not correspond to the claimed "widget" with respect to claim 1. This argument is not persuasive. Gawande discloses that a user of a user device may share a screen or other content during a communication session. For example, the user interface shown in FIG. 6 comprises a shared screen or shared content window enabling users to view visual data shared by a user during a communication session. Visual data shared during the communication session via a shared screen or shared content window of a user device may be analyzed by a research assistant application. The research assistant application then provides research information relating to the topic of the visual data. See FIGs. 1, 3 & 6, 0041, 0043, 0059, 0060.

It is noted that claim terms are to be given their broadest reasonable interpretation, as understood by those of ordinary skill in the art and taking into account whatever enlightenment may be had from the specification. In re Morris, 127 F.3d 1048, 1054 (Fed. Cir. 1997). According to applicant's specification, and as also indicated by applicant on page 10 of the remarks, the widget can include an element of a GUI that displays information. In this view, the "visual element" refers to a shared screen or shared content window that is a component of a user interface for displaying shared content, as taught by Gawande. For example, the user interface illustrated in FIG. 6 of Gawande comprises a shared screen or shared content window presenting shared content such as an image comprising a piano, a guitar and a drum. Therefore, the examiner's interpretation of "visual element" as a shared screen or shared content window presenting shared content in Gawande reasonably equates to the claimed "widget".
Next, applicant has not persuasively distinguished the limitations of "wherein the synchronous widget…in the live share session" recited in claim 1 as taught or suggested by the Gawande reference. Particularly, Gawande teaches or suggests the features "wherein the synchronous widget includes the details of the interaction, including the current real-time state of the widget" (the visual element of the screenshare includes the presentation of the research information in parallel with the current shared content – see FIGs. 1 & 6, 0040-0042, 0046, 0056, 0060), such that when the second participating computing device joins the live share session the widget is presented in said current real-time state independent of changes to the audio-video content in the live share session (as the second user joins the live communication session, the visual element displays the shared content and the research information in the real-time state independent of changes to the image/voice of the first user in the live communication session, e.g., due to muting or enabling/disabling the camera/webcam for a moment from the first user device, since the image/voice of the first user is presented separately in a reduced-size window – see FIGs. 1, 3 and 6, 0036, 0040, 0046, 0060).

In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning.
But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).

Claims 8 and 15 recite similar features as claim 1. Accordingly, the responses to arguments addressed with respect to claim 1 above are also applicable to claims 8 and 15. Dependent claims 2-7, 9-14, and 16-20 are rejected at least for the reasons described above regarding independent claims 1, 8 and 15, and by virtue of their respective dependencies upon independent claims 1, 8 and 15. Therefore, the rejections of claims 1-20 are maintained.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-11, 13-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gawande et al. (US 20230033727 A1) in view of Chang et al. (US 20220365740 A1) and further in view of Hyndman et al. (US 20120229446 A1).

Regarding claim 1, Gawande teaches a communication system (as shown in FIG. 1) that initiates and maintains a screenshare comprising live audio-video (AV) content from one or more participant computing devices in a live share session, the system comprising: a receiver configured to receive a live audio-video content feed from a first participating computing device (a receiver in a server receives real-time video/voice of a first user of a first user device, e.g., 101B – see FIGs. 1 and 3, 0026, 0028, 0040, 0041, 0046); a processor configured to: initiate, by a live share creator, a live share session that includes the live audio-video content feed from the first participating computing device and a widget (e.g., initiate a live communication session, by the first user, that includes the real-time video/voice from the first user device and a visual element as a shared screen or shared content window – see FIGs. 1, 3 & 6, 0026, 0039-0041, 0043, 0060); generate a screenshare, by a screenshare renderer, containing the audio-video content feed from the first participating computing device and the widget (generate a screenshare, via an application, comprising the real-time video/voice of the first user and the visual element – see FIGs. 1-6, 0027, 0028, 0039-0041, 0043, 0060); detect, by a widget state monitor, in real time any interaction with the widget; and when an interaction with the widget occurs, record, by the widget state monitor, details of the interaction including a current real-time state of the widget (detect any associations with the shared content of the visual element by a research assistant application in real time; and when an association is detected, e.g., detection of a topic, record details of the association in terms of generating the research information with a current real-time state of the visual element in terms of presentation of the current shared content – see FIGs. 1 and 6, 0041-0043, 0047-0049, 0056, 0060); a transmitter configured to send the screenshare, including the live audio-video content feed from the first participating computing device, and a synchronous widget to a second participating computing device (a transmitter in the server delivers to a second user device participating in the live communication session the screenshare comprising the real-time video/voice from the first user device and a visual element of the screenshare comprising the presentation of the research information in parallel with the current shared content – see FIGs. 1, 3 & 6, 0040-0042 and 0060), wherein the synchronous widget includes the details of the interaction, including the current real-time state of the widget (the visual element of the screenshare includes the presentation of the research information in parallel with the current shared content – see FIGs. 1 & 6, 0040-0042, 0046, 0056, 0060), such that when the second participating computing device joins the live share session the widget is presented in said current real-time state independent of changes to the audio-video content in the live share session (as the second user joins the live communication session, the visual element displays the shared content and the research information in the real-time state independent of changes to the image/voice of the first user in the live communication session, e.g., due to muting or enabling/disabling the camera/webcam for a moment from the first user device, since the image/voice of the first user is presented separately in a reduced-size window – see FIGs. 1, 3 and 6, 0036, 0040, 0046, 0060).

Gawande does not teach the feature "wherein the synchronous widget is maintained persistently regardless of any interruption in the live share session". Chang discloses that the content in the shared-content session continues to be shared with participants even if a user that initiated the shared-content session disconnects from the shared-content session (e.g., leaves the shared-content session) in a video conference. See 0462. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gawande by including that the synchronous widget is maintained persistently regardless of any interruption in the live share session, as disclosed by Chang, in order to improve the efficiency of the shared-content session.

Both fail to teach that when the live share session is terminated the synchronous widget is maintained persistently in a subsequent live share session. Hyndman teaches that if a virtual meeting is ended, a virtual environment template containing information and content related to a topic of the virtual meeting is saved and used for a subsequent virtual meeting so that the meeting can be continued with all information and content in place. See abstract and 0028.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teaching of Gawande and Chang by having the synchronous widget be maintained persistently in a subsequent live share session when the live share session is terminated, as taught or suggested by Hyndman, in order to save the time consumed in providing the same materials related to a topic.

Regarding claim 2, Gawande teaches a widget generating tool configured to generate the widget, wherein the widget is configured to interact with the first computing device or the second computing device (other applications 105, such as a slide presentation application, a document editor application, a document display application, a graphical editing application, a spreadsheet, a multimedia application, a gaming application, etc., configured to generate the shared content of the visual element, and the visual element is associated by the first user device – see 0027, 0060).

Regarding claim 3, Gawande teaches a screenshare renderer configured to generate the screenshare based on the synchronous widget and video content contained in the live audio-video content feed from the first participating computing device (generate the screenshare in the live communication session using the visual element of the screenshare for presentation of the shared content in parallel with the research information and video content from the first user device – see FIGs. 1-6, 0027, 0028, 0040, 0041, 0047, 0060).

Regarding claim 4, Gawande in combination with Chang further teaches wherein the screenshare renderer is configured to communicate and interact with the transmitter to: assemble the synchronous widget and the video content into a video screenshare (the display communicates with the server to combine the data of the live information feed and the video content into a video screenshare 500 as shown in FIG. 6 – see FIG. 6, 0028, 0060); packetize the video screenshare; and send the packetized video screenshare to the second participating computing device (transmits data packets for the shared-content session – see Chang: 0279).

Regarding claim 6, Gawande in view of Chang teaches wherein the interruption in the live share session comprises the first computing device disconnecting from the live share session and reconnecting at a later time (a participant of the shared-content session can leave and rejoin the shared-content session at a later time – see Chang: 0607, 0641).

Regarding claim 7, Gawande in combination with Chang teaches wherein the first computing device is provided with the synchronous widget when reconnecting at a later time (provide the live information feed element to the first user device when rejoining the shared-content session at a later time – see FIG. 6 and 0060; Chang: 0607, 0641).

Regarding claims 8 and 15, see the rejection of claim 1. Regarding claims 9 and 16, see the rejection of claim 2. Regarding claims 10 and 17, see the rejection of claim 3. Regarding claims 11 and 18, see the rejection of claim 4. Regarding claims 13 and 20, see the rejection of claim 6. Regarding claim 14, see the rejection of claim 7.

Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Gawande et al. (US 20230033727 A1) in view of Chang et al. (US 20220365740 A1) and further in view of Hyndman et al. (US 20120229446 A1) and further in view of Ren et al. (US 20230147216 A1).

Regarding claim 5, Gawande does not teach the features as claimed. However, Ren teaches translating video/audio from a first format provided by a first device into a second format compatible with a second user device in a video conferencing platform. See 0022, 0052, 0056, 0075, 0076.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Gawande, Chang and Hyndman by having the screenshare renderer include a translator configured to translate the video content or audio content contained in the live audio content feed from a first format or language to a second format or language used by the second participating computing device, as disclosed or suggested by Ren, in order to increase the effectiveness of presenting content to the user appropriately.

Regarding claims 12 and 19, see the rejection of claim 5.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NGOC K VU, whose telephone number is (571) 272-7306. The examiner can normally be reached Monday & Thursday, 9 AM-6 PM EST; Tuesday, Wednesday & Friday, out of office. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, NATHAN FLYNN, can be reached at 571-272-1915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NGOC K VU/
Primary Examiner, Art Unit 2421

Prosecution Timeline

Feb 28, 2023 — Application Filed
Aug 09, 2024 — Non-Final Rejection (§103)
Nov 14, 2024 — Response Filed
Feb 05, 2025 — Final Rejection (§103)
Aug 08, 2025 — Request for Continued Examination
Aug 13, 2025 — Response after Non-Final Action
Sep 13, 2025 — Non-Final Rejection (§103)
Jan 06, 2026 — Response Filed
Mar 07, 2026 — Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581153 — SERVER SYSTEM, APPLICATION PROGRAM DISTRIBUTION SERVER, VIEWING TERMINAL, CONTENT VIEWING METHOD, APPLICATION PROGRAM, DISTRIBUTION METHOD, AND APPLICATION PROGRAM DISTRIBUTION METHOD — Granted Mar 17, 2026 (2y 5m to grant)
Patent 12549778 — ANGLE-OF-VIEW SWITCHING METHOD, APPARATUS AND SYSTEM FOR FREE ANGLE-OF-VIEW VIDEO, AND DEVICE AND MEDIUM — Granted Feb 10, 2026 (2y 5m to grant)
Patent 12532044 — METHOD AND SYSTEM FOR ACCESSING USER RELEVANT MULTIMEDIA CONTENT WITHIN MULTIMEDIA FILES — Granted Jan 20, 2026 (2y 5m to grant)
Patent 12501095 — ADAPTIVE PLAYBACK METHOD AND DEVICE FOR VIDEO — Granted Dec 16, 2025 (2y 5m to grant)
Patent 12464173 — VIRTUAL GIFT DISPLAY — Granted Nov 04, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 72% (85% with interview, a +13.9% lift)
Median Time to Grant: 3y 11m
PTA Risk: High

Based on 253 resolved cases by this examiner. Grant probability derived from career allow rate.
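The projection figures are consistent under a simple additive model (my reconstruction of the arithmetic, not a documented method): the with-interview figure is the unrounded career allow rate plus the observed interview lift.

```python
# Sketch: reconstructing the projection numbers (additive model assumed).
granted, resolved = 181, 253
base = granted / resolved        # career allow rate: 71.5%, shown as 72%
interview_lift = 0.139           # +13.9% lift observed on interviewed cases

with_interview = base + interview_lift
print(f"Grant probability with interview: {with_interview:.0%}")  # 85%
```

This also explains why 72% + 13.9% does not display as 86%: the sum is taken before rounding (71.5% + 13.9% = 85.4%), matching the 85% shown.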
