DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The references listed in the Information Disclosure Statements filed on January 06, 2025 and December 29, 2025 have been considered by the examiner (see attached PTO-1449 form).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-13 of U.S. Patent No. 11,457,273. Although the claims at issue are not identical, they are not patentably distinct from each other because instant claims 1-20 are anticipated by the conflicting patented claims 1-13, as shown in the table below. The difference between the instant examined claims and the conflicting patented claims is that the conflicting patented claims are narrower in scope and fall within the scope of the examined claims. Thus, the species or sub-genus claimed in the conflicting patent anticipates the examined claimed genus. Therefore, a patent to the examined claim genus would improperly extend the right to exclude granted by a patent to the species or sub-genus should the genus issue as a patent after the species or sub-genus. See MPEP § 804(II)(B)(1).
Instant App 16/959,477     | U.S. Patent No. 11,457,273
Claims 1, 2, 4, 11, 12, 14 | Claims 1, 6
Claims 3, 13               | Claims 2, 7
Claims 5, 15               | Claims 4, 9
Claims 6, 16               | Claims 3, 8
Claims 7, 17               | Claim 4
Claims 8, 18               | Claims 5, 10
Claims 9, 19               | Claims 11, 12
Claims 10, 20              | Claim 13
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9 and 11-19 are rejected under 35 U.S.C. 103 as being unpatentable over Greene (U.S. Pub. No. 2023/0039717) in view of Zhang et al. (U.S. Pub. No. 2017/0185871).
Regarding claim 1, Greene discloses a device (see fig. 1 (content receiver 202)) comprising:
a display (see paragraph 0033; media presentation device 204 having the display 206);
a memory storing one or more instructions (see paragraph 0035; non-transitory computer readable memory (NTCRM) 308); and
at least one processor configured to execute the one or more instructions stored in the memory to cause the device to (see paragraph 0035; control circuitry 302, implemented, e.g., by a general purpose central processing unit (CPU) or a specialized image processing unit (IPU)):
obtain a first video (see paragraphs 0030, 0037; The content receiver 202 to receive the media content for presentation via the media presentation device 204. content receiver 202 receives media content from the satellite receiving antenna 210A);
compare an aspect ratio of the first video with an aspect ratio of a display area of the display (see paragraphs 0005-0007 and fig. 5 (steps 504-508); automatically resizes video images based on the content being displayed on the TV screen);
based on the aspect ratio of the first video being different from the aspect ratio of the display area, obtain an expanded video corresponding to all frames of the first video (see paragraph 0007, fig. 5 (steps 508-510), fig. 8 (steps 806-810); media content needs adjustment, adapt output of media content), the expanded video having an aspect ratio corresponding to the aspect ratio of the display area (see paragraph 0007, fig. 8 (step 810); output the modified video frame to reduce the amount of blank space on the visual display), and
display the expanded video in the display area (see fig. 5 (step 512 – display media content to user), fig. 8 (step 804 – display media content on the visual display)).
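As an illustrative aside only (not part of the cited references or the record), the aspect-ratio comparison and letterbox determination mapped above can be sketched as follows; the function names, the exact-ratio test, and the fit-to-width assumption are hypothetical, not drawn from Greene:

```python
from fractions import Fraction

def needs_expansion(video_w: int, video_h: int,
                    display_w: int, display_h: int) -> bool:
    """Return True when the video's aspect ratio differs from the display's.

    Exact rational comparison avoids floating-point tolerance issues.
    """
    return Fraction(video_w, video_h) != Fraction(display_w, display_h)

def letterbox_height(video_w: int, video_h: int,
                     display_w: int, display_h: int) -> int:
    """Rows of blank space (top + bottom) when a wider video is fit to width."""
    scaled_h = display_w * video_h // video_w  # video height after fit-to-width
    return max(display_h - scaled_h, 0)

# A roughly 2.35:1 film frame on a 16:9 panel leaves letterbox bars:
print(needs_expansion(1920, 817, 1920, 1080))   # True
print(letterbox_height(1920, 817, 1920, 1080))  # 263
```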
However, Greene is silent as to AI models, i.e., applying a reference frame to a trained AI model, wherein the trained AI model is generated by using the reference frame and at least one related frame associated with the reference frame as training data.
Zhang et al. teaches neural networks trained for image processing, including aspect ratio adjustment (i.e., by applying a reference frame comprised in the first video to a trained artificial intelligence (AI) model) (see paragraphs 0008-0009, 0045; a neural network trained by inputting a set of raw data images and a correlating set of desired quality output images; the neural network may be configured for being downloaded to a mobile imaging device; the at least one image quality attribute may include image size and aspect ratio; fig. 6 shows a CNN architecture that outputs a resized RGB patch), and neural-network-based resizing to adjust the aspect ratio (see paragraph 0009),
wherein the trained AI model is generated by using the reference frame and at least one related frame associated with the reference frame as training data (see paragraphs 0008-0011, 0026-0032, fig. 5; the neural network is trained by inputting a set of raw data images and a correlating set of desired quality output images via deep CNN training).
It would have been obvious to a skilled artisan before the effective filing date of the claimed invention to modify the system of Greene with the teachings of Zhang et al., the motivation being to improve resizing, aspect-ratio correction, and image quality.
Regarding claim 11, claim 11 is rejected for the same reason set forth in the rejection of claim 1.
Regarding claims 2 and 12, Greene and Zhang et al. disclose everything claimed as applied above (see claims 1 and 11). Greene discloses wherein the at least one processor is further configured to execute the one or more instructions to cause the device to identify a letterbox to be displayed in the display area based on the aspect ratio of the first video being different from the aspect ratio of the display area (see paragraphs 0005-0007, fig. 5 (steps 504-508), determine unused areas of extracted frame; fig. 8 (steps 806, 808)).
Regarding claims 3 and 13, Greene and Zhang et al. disclose everything claimed as applied above (see claims 1 and 11). Greene discloses wherein the at least one processor is further configured to execute the one or more instructions to cause the device to:
extract frames comprised in the first video (see fig. 5 (steps 504, 514), extract video frame).
Zhang et al. discloses generate the training data to be used to generate the trained AI model based on the extracted frames (see paragraphs 0008, 0010-0011, 0026-0032, fig. 5 (500, 512)); and
obtain the expanded video by updating the trained AI model by inputting the training data to the trained AI model (see paragraphs 0008-0011, 0013-0014 and fig. 5 (500)).
Regarding claims 4 and 14, Greene and Zhang et al. disclose everything claimed as applied above (see claims 1 and 11). Zhang et al. discloses wherein the at least one processor is further configured to execute the one or more instructions to cause the device to generate the trained AI model by using the reference frame and a resized frame obtained by resizing the reference frame as the training data (see paragraphs 0008, 0010-0011, 0045, fig. 5 (500, 504, 512, 514, 520, 525), fig. 4 (400)).
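For illustration only (outside the record), the training-pair generation recited in claims 4 and 14 — a reference frame paired with a resized copy of itself — can be sketched as below; the function name and the nearest-neighbor resizing method are assumptions, not taken from Greene or Zhang et al.:

```python
import numpy as np

def make_training_pair(reference: np.ndarray, target_w: int):
    """Build one (input, target) pair: the reference frame resized
    horizontally via nearest-neighbor sampling, paired with the original."""
    h, w, c = reference.shape
    cols = np.arange(target_w) * w // target_w  # source column per output column
    resized = reference[:, cols, :]             # reference frame at the new width
    return resized, reference                   # a model would learn resized -> reference

# A toy 4x8 RGB frame resized to width 6:
frame = np.arange(4 * 8 * 3, dtype=np.uint8).reshape(4, 8, 3)
x, y = make_training_pair(frame, 6)
print(x.shape, y.shape)   # (4, 6, 3) (4, 8, 3)
```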
Regarding claims 5 and 15, Greene and Zhang et al. disclose everything claimed as applied above (see claims 1 and 11). Greene discloses wherein the at least one processor is further configured to execute the one or more instructions to cause the device to:
extract the reference frame, at least one previous frame, and at least one next frame comprised in the first video (see paragraph 0007, fig. 5 (steps 504, 514); extract video frame); and
Zhang et al. discloses generate the trained AI model by using the reference frame, the at least one previous frame, and the at least one next frame as the training data (see paragraphs 0008, 0010-0011, 0026-0032, 0038, fig. 5 (500)).
Regarding claims 6 and 16, Greene and Zhang et al. disclose everything claimed as applied above (see claims 1 and 11). Greene discloses wherein the device further comprises a communication interface (see paragraphs 0030-0031, 0034-0035, fig. 2 (network interface)) configured to transmit the first video to a server (see paragraphs 0022, 0030-0031 and fig. 1).
Zhang et al. discloses sending image data to a server for training (see paragraphs 0013-0014 and fig. 1);
receive, from the server, the trained AI model generated by the server using the first video (see paragraphs 0011, 0013-0014), and wherein the at least one processor is further configured to execute the one or more instructions to cause the device to obtain the expanded video by inputting at least one frame of the first video to the trained AI model received from the server (see paragraphs 0008-0010, 0012 and fig. 6).
Regarding claims 7 and 17, Greene and Zhang et al. disclose everything claimed as applied above (see claims 1 and 11). Greene discloses wherein the at least one processor is further configured to execute the one or more instructions to cause the device to generate frames of the expanded video corresponding to all frames of the first video (see paragraph 0007, fig. 5 (steps 504-512), fig. 8 (steps 802-812)).
Zhang et al. discloses by training the trained AI model (see paragraphs 0008, 0010-0011, 0013-0014) by inputting the reference frame, at least one previous frame, and at least one next frame, to the trained AI model (see paragraphs 0008, 0010-0011, 0026-0032 and fig. 5 (500)).
Regarding claims 8 and 18, Greene and Zhang et al. disclose everything claimed as applied above (see claims 1 and 11). Greene discloses wherein the device further comprises a communication interface (see paragraphs 0030-0031, 0035, fig. 1, fig. 2; network interface 310), and
wherein the at least one processor is further configured to execute the one or more instructions to cause the device to:
detect at least one of a pattern and a color constituting the reference frame comprised in the first video (see paragraphs 0016-0017, 0007, figs. 6A-6D, 7A-7B);
search for an image related to the detected at least one of the pattern and the color by using the communication interface (see paragraphs 0018, 0062, 0064-0066).
Zhang et al. discloses generate the trained AI model by using the reference frame and the searched image as the training data (see paragraphs 0008, 0010-0011, 0013-0014).
Regarding claims 9 and 19, Greene and Zhang et al. disclose everything claimed as applied above (see claims 1 and 11). Zhang et al. discloses wherein the at least one processor is further configured to execute the one or more instructions to cause the device to:
identify an object comprised in the reference frame of the first video (see paragraphs 0026-0032, 0038);
determine a category of the first video according to the object (see paragraphs 0027, 0038 and fig. 6); and
select the trained AI model related to the category of the first video (see paragraphs 0013-0014, 0027).
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Greene and Zhang et al. as applied to claims 9 and 19 above, and further in view of Stern (U.S. Pub. No. 2018/0324472).
Regarding claims 10 and 20, Greene and Zhang et al. disclose everything claimed as applied above (see claims 9 and 19).
However, Greene and Zhang et al. fail to explicitly disclose wherein the category comprises at least one from among a scientific fiction (SF) movie, a documentary, a live performance, a 2D animation, a 3D animation, an augmented reality (AR) video and a hologram video.
Stern discloses wherein the category comprises at least one from among a scientific fiction (SF) movie, a documentary (see fig. 4A, paragraphs 0003, 0013, 0030), a live performance, a 2D animation, a 3D animation, an augmented reality (AR) video and a hologram video.
It would have been obvious to a skilled artisan before the effective filing date of the claimed invention to modify the system of Greene and Zhang et al. with the teachings of Stern, the motivation being to automatically select the most appropriate AI model for a given video genre.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NNENNA NGOZI EKPO whose telephone number is (571)270-1663. The examiner can normally be reached M-W 10:00am - 6:30pm, TH-F 8:00am - 4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
NNENNA EKPO
Primary Examiner
Art Unit 2425
/NNENNA N EKPO/Primary Examiner, Art Unit 2425 March 3, 2026.