DETAILED ACTION
Double Patenting
1. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
2. Claims 1, 2, 4-6, 8-13, 15-17 and 19 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4, 11 and 15 of co-pending Application No. 18/661,314 in view of Afrasiabi, U.S. Patent Application Publication No. 2022/0286625 (hereinafter Afrasiabi).
Regarding claim 1 of the instant application, claim 1 of co-pending Application No. 18/661,314 discloses a method for providing on-video content during a video presentation by at least one user, the method comprising, during execution of one or more applications by an electronic device associated with at least a display unit and connected to a capture element having a field of view including the at least one user:
generating in a screen area of the display unit a first image layer comprising content associated with at least one of the one or more applications;
generating in the screen area of the display unit a second image layer comprising a content window, wherein the second image layer at least partially overlaps the first image layer;
generating a location of the content window within the screen area, the location within the screen area being aligned relative to a position of the capture element connected to the electronic device;
wherein content displayed in the content window is provided in accordance with the at least one of the one or more applications, and
wherein the location of the content window within the screen area does not alter the field of view of images captured by the capture element.
Further regarding claim 1 of the instant application, claim 1 of co-pending Application No. 18/661,314 does not teach the content window is partially transparent. However, Afrasiabi discloses that the content window is partially transparent (from paragraph 0053, see transparent). Therefore, it would have been obvious to one of ordinary skill in the art to modify claim 1 of co-pending Application No. 18/661,314 such that the content window is partially transparent, as taught by Afrasiabi. This modification would have improved the experience by ensuring the overlay does not significantly block the background image of the video feed, as suggested by Afrasiabi.
Regarding claim 2 of the instant application, claim 1 of co-pending Application No. 18/661,314 as modified by Afrasiabi discloses wherein the location and/or orientation of the at least partially transparent content window within the screen area is automatically generated along a determined line of sight (from paragraph 0052 of Afrasiabi, see line of sight) between the capture element and the at least one user.
Regarding claim 4 of the instant application, claim 4 of co-pending Application No. 18/661,314 discloses automatically ascertaining a location of the at least one user relative to the capture element.
Regarding claim 5 of the instant application, claim 1 of co-pending Application No. 18/661,314 as modified by Afrasiabi discloses the location and/or orientation of the at least partially transparent content window within the screen area is dynamically adjustable based on user input from the at least one user (from paragraph 0051 of Afrasiabi, see Additionally or alternatively, the user may customize and save placement of the fields 220 so as to arrange script and boxes 220 in a desired location on the digital overlay screen 112 for display therein when an event occurs at a later time).
Regarding claim 6 of the instant application, claim 1 of co-pending Application No. 18/661,314 as modified by Afrasiabi discloses the at least partially transparent content window is fixed within the screen area at the location and/or orientation based on user input from the at least one user (from paragraph 0051 of Afrasiabi, see Additionally or alternatively, the user may customize and save placement of the fields 220 so as to arrange script and boxes 220 in a desired location on the digital overlay screen 112 for display therein when an event occurs at a later time).
Regarding claim 8 of the instant application, claim 1 of co-pending Application No. 18/661,314 as modified by Afrasiabi discloses the content is displayed in the at least partially transparent content window according to one or more parameters set via user input from the at least one user (from paragraph 0049, see Moreover, different fields or boxes 220 may be configured with different parameters).
Regarding claim 9 of the instant application, claim 11 of co-pending Application No. 18/661,314 discloses a system for providing on-video content during a video presentation by at least one user, the system comprising:
an electronic device comprising a processor functionally linked to at least a display unit and connected to a capture element having a field of view including the at least one user;
wherein the processor is configured, during execution of one or more applications via the electronic device, to:
generate in a screen area of the display unit a first image layer comprising content associated with at least one of the one or more applications;
generate in the screen area of the display unit a second image layer comprising a content window, wherein the second image layer at least partially overlaps the first image layer;
generate a location of the content window within the screen area, the location within the screen area being aligned relative to a position of the capture element connected to the electronic device;
wherein content displayed in the content window is provided in accordance with the at least one of the one or more applications, and
wherein the location of the content window within the screen area does not alter the field of view of images captured by the capture element.
Further regarding claim 9 of the instant application, claim 11 of co-pending Application No. 18/661,314 does not teach the content window is partially transparent. However, Afrasiabi discloses that the content window is partially transparent (from paragraph 0053, see transparent). Therefore, it would have been obvious to one of ordinary skill in the art to modify claim 11 of co-pending Application No. 18/661,314 such that the content window is partially transparent, as taught by Afrasiabi. This modification would have improved the experience by ensuring the overlay does not significantly block the background image of the video feed, as suggested by Afrasiabi.
Regarding claim 10, the combination of claim 11 of co-pending Application No. 18/661,314 and Afrasiabi discloses the display unit (from Figure 1, see 102) and the capture element (from Figure 1, see 103) are integrated into the electronic device.
Regarding claim 11, the combination of claim 11 of co-pending Application No. 18/661,314 and Afrasiabi discloses the at least one of the one or more applications comprises a web conferencing platform (from paragraph 0003, see Zoom, Microsoft Teams, BlueJeans, FaceTime, Skype, Webex Meetings, and GoToMeeting).
Regarding claim 12, the combination of claim 11 of co-pending Application No. 18/661,314 and Afrasiabi discloses the system of claim 11, wherein the second image layer is generated via execution of an application (from Figure 1, see 101) of the one or more applications separate from the web conferencing platform (from Figure 1, see 100).
Regarding claim 13 of the instant application, claim 11 of co-pending Application No. 18/661,314 as modified by Afrasiabi discloses wherein the location and/or orientation of the at least partially transparent content window within the screen area is automatically generated along a determined line of sight (from paragraph 0052 of Afrasiabi, see line of sight) between the capture element and the at least one user.
Regarding claim 15 of the instant application, claim 15 of co-pending Application No. 18/661,314 discloses the processor is further configured to automatically ascertain a location of the at least one user relative to the capture element.
Regarding claim 16, the combination of claim 11 of co-pending Application No. 18/661,314 and Afrasiabi discloses the location and/or orientation of the at least partially transparent content window within the screen area is dynamically adjustable based on user input from the at least one user (from paragraph 0051, see Additionally or alternatively, the user may customize and save placement of the fields 220 so as to arrange script and boxes 220 in a desired location on the digital overlay screen 112 for display therein when an event occurs at a later time).
Regarding claim 17, the combination of claim 11 of co-pending Application No. 18/661,314 and Afrasiabi discloses the at least partially transparent content window is fixed within the screen area at the location and/or orientation based on user input from the at least one user (from paragraph 0051, see Additionally or alternatively, the user may customize and save placement of the fields 220 so as to arrange script and boxes 220 in a desired location on the digital overlay screen 112 for display therein when an event occurs at a later time).
Regarding claim 19, the combination of claim 11 of co-pending Application No. 18/661,314 and Afrasiabi discloses the content is displayed in the at least partially transparent content window according to one or more parameters set via user input from the at least one user (from paragraph 0049, see Moreover, different fields or boxes 220 may be configured with different parameters).
3. Claims 7 and 18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 11 of co-pending Application No. 18/661,314 combined with Afrasiabi in further view of Revach et al., U.S. Patent No. 11,048,372 (hereinafter Revach).
Regarding claim 7 of the instant application, the combination of claim 1 of co-pending Application No. 18/661,314 and Afrasiabi does not teach the at least partially transparent content window of the second image layer is generated with a level of transparency set according to input from the at least one user. However, Revach discloses this feature (from column 9, see The degree of transparency applied to the second application 515 may be configurable by user 160 according to their preferences). Therefore, it would have been obvious to one of ordinary skill in the art to further modify the combination of claim 1 of co-pending Application No. 18/661,314 and Afrasiabi such that the level of transparency is set according to input from the at least one user, as taught by Revach. This modification would have improved the system’s flexibility by accounting for different colored data displays, as suggested by Revach.
Regarding claim 18 of the instant application, the combination of claim 11 of co-pending Application No. 18/661,314 and Afrasiabi does not teach the at least partially transparent content window of the second image layer is generated with a level of transparency set according to input from the at least one user. However, Revach discloses this feature (from column 9, see The degree of transparency applied to the second application 515 may be configurable by user 160 according to their preferences). Therefore, it would have been obvious to one of ordinary skill in the art to further modify the combination of claim 11 of co-pending Application No. 18/661,314 and Afrasiabi such that the level of transparency is set according to input from the at least one user, as taught by Revach. This modification would have improved the system’s flexibility by accounting for different colored data displays, as suggested by Revach.
4. Claim 14 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 11 of co-pending Application No. 18/661,314 combined with Afrasiabi in further view of Huggar et al., U.S. Patent No. 12,028,389 (hereinafter Huggar).
Regarding claim 14, the combination of claim 11 of co-pending Application No. 18/661,314 and Afrasiabi does not explicitly teach automatically ascertaining the position of the capture element relative to the screen area. However, Huggar discloses automatically ascertaining the position of the capture element relative to the screen area (from abstract, see identify a location of a camera with respect to the display). Therefore, it would have been obvious to one of ordinary skill in the art to further modify the combination of claim 11 of co-pending Application No. 18/661,314 and Afrasiabi with automatically ascertaining the position of the capture element relative to the screen area, as taught by Huggar. This modification would have improved flexibility by allowing the camera to be placed in different locations, as suggested by Huggar.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim Rejections - 35 USC § 102
5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
6. Claims 1, 2, 5, 6, 8-13, 16, 17 and 19 are rejected under 35 U.S.C. § 102(a)(2) as being anticipated by Afrasiabi.
Regarding claim 1, Afrasiabi discloses a method for providing on-video content during a video presentation by at least one user, the method comprising, during execution of one or more applications by an electronic device associated with at least a display unit (from Figure 1, see 102) and connected to a capture element (from Figure 1, see 103) having a field of view including the at least one user:
generating in a screen area of the display unit a first image layer comprising content associated with at least one of the one or more applications (from Figure 7, see 120);
generating in the screen area of the display unit a second image layer comprising an at least partially transparent content window, wherein the second image layer at least partially overlaps the first image layer (from Figure 7, see 220);
generating a location of the at least partially transparent content window within the screen area, the location within the screen area being aligned relative to a position of the capture element connected to the electronic device (from paragraph 0052, see a script within a field 220 may be displayed in a position near the presenter’s camera 103),
wherein content displayed in the at least partially transparent content window is provided in accordance with the at least one of the one or more applications (from Figure 7, see 240), and
wherein the location of the at least partially transparent content window within the screen area does not alter the field of view (from paragraph 0052, see field of view) of images captured by the capture element.
Regarding claim 2, Afrasiabi discloses wherein the location and/or orientation of the at least partially transparent content window within the screen area is automatically generated along a determined line of sight (from paragraph 0052, see line of sight) between the capture element and the at least one user.
Regarding claim 5, Afrasiabi discloses the location and/or orientation of the at least partially transparent content window within the screen area is dynamically adjustable based on user input from the at least one user (from paragraph 0051, see Additionally or alternatively, the user may customize and save placement of the fields 220 so as to arrange script and boxes 220 in a desired location on the digital overlay screen 112 for display therein when an event occurs at a later time).
Regarding claim 6, Afrasiabi discloses the at least partially transparent content window is fixed within the screen area at the location and/or orientation based on user input from the at least one user (from paragraph 0051, see Additionally or alternatively, the user may customize and save placement of the fields 220 so as to arrange script and boxes 220 in a desired location on the digital overlay screen 112 for display therein when an event occurs at a later time).
Regarding claim 8, Afrasiabi discloses the content is displayed in the at least partially transparent content window according to one or more parameters set via user input from the at least one user (from paragraph 0049, see Moreover, different fields or boxes 220 may be configured with different parameters).
Regarding claim 9, Afrasiabi discloses a system for providing on-video content during a video presentation by at least one user, the system comprising:
an electronic device (from Figure 1, see 110) comprising a processor (from Figure 1, see 109) functionally linked to at least a display unit (from Figure 1, see 102) and connected to a capture element (from Figure 1, see 103) having a field of view including the at least one user;
wherein the processor is configured, during execution of one or more applications via the electronic device, to:
generate in a screen area of the display unit a first image layer comprising content associated with at least one of the one or more applications (from Figure 7, see 120);
generate in the screen area of the display unit a second image layer comprising an at least partially transparent content window, wherein the second image layer at least partially overlaps the first image layer (from Figure 7, see 220);
generate a location of the at least partially transparent content window within the screen area, the location within the screen area being aligned relative to a position of the capture element connected to the electronic device (from paragraph 0052, see a script within a field 220 may be displayed in a position near the presenter’s camera 103);
wherein content displayed in the at least partially transparent content window is provided in accordance with the at least one or more applications (from Figure 7, see 240), and
wherein the location of the at least partially transparent content window within the screen area does not alter the field of view (from paragraph 0052, see field of view) of images captured by the capture element.
Regarding claim 10, Afrasiabi discloses the display unit (from Figure 1, see 102) and the capture element (from Figure 1, see 103) are integrated into the electronic device.
Regarding claim 11, Afrasiabi discloses the at least one of the one or more applications comprises a web conferencing platform (from paragraph 0003, see Zoom, Microsoft Teams, BlueJeans, FaceTime, Skype, Webex Meetings, and GoToMeeting).
Regarding claim 12, Afrasiabi discloses the system of claim 11, wherein the second image layer is generated via execution of an application (from Figure 1, see 101) of the one or more applications separate from the web conferencing platform (from Figure 1, see 100).
Claim 13 is rejected for the same reasons as claim 2.
Claim 16 is rejected for the same reasons as claim 5.
Claim 17 is rejected for the same reasons as claim 6.
Claim 19 is rejected for the same reasons as claim 8.
Claim Rejections - 35 USC § 103
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Afrasiabi in view of Revach.
Regarding claim 7, Afrasiabi does not teach the at least partially transparent content window of the second image layer is generated with a level of transparency set according to input from the at least one user. However, Revach discloses this feature (from column 9, see The degree of transparency applied to the second application 515 may be configurable by user 160 according to their preferences). Therefore, it would have been obvious to one of ordinary skill in the art to modify Afrasiabi such that the level of transparency is set according to input from the at least one user, as taught by Revach. This modification would have improved the system’s flexibility by accounting for different colored data displays, as suggested by Revach.
Claim 18 is rejected for the same reasons as claim 7.
9. Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Afrasiabi in view of Cossar et al., U.S. Patent No. 12,052,299 (hereinafter Cossar).
Regarding claim 4, Afrasiabi does not teach automatically ascertaining a location of the at least one user relative to the capture element. However, Cossar discloses automatically ascertaining a location of the at least one user relative to the capture element (from Figure 4B, see Lens Too). Therefore, it would have been obvious to one of ordinary skill in the art to modify Afrasiabi with automatically ascertaining a location of the at least one user relative to the capture element, as taught by Cossar. This modification would have made the experience more effective by providing proper framing, as suggested by Cossar.
Claim 15 is rejected for the same reasons as claim 4.
10. Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Afrasiabi in view of Huggar.
Regarding claim 3, Afrasiabi does not explicitly teach automatically ascertaining the position of the capture element relative to the screen area. However, Huggar discloses automatically ascertaining the position of the capture element relative to the screen area (from abstract, see identify a location of a camera with respect to the display). Therefore, it would have been obvious to one of ordinary skill in the art to modify Afrasiabi with automatically ascertaining the position of the capture element relative to the screen area, as taught by Huggar. This modification would have improved flexibility by allowing the camera to be placed in different locations, as suggested by Huggar.
Claim 14 is rejected for the same reasons as claim 3.
Response to Arguments
11. Applicant’s arguments have been considered but are deemed to be moot in view of the new grounds of rejection.
Conclusion
12. Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
13. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLISA ANWAH whose telephone number is 571-272-7533. The examiner can normally be reached Monday to Friday from 8:30 AM to 6:00 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn Edwards, can be reached at 571-270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300 for both regular and After Final communications.
Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist whose telephone number is 571-272-2600.
Olisa Anwah
Patent Examiner
December 14, 2025
/OLISA ANWAH/Primary Examiner, Art Unit 2692