Prosecution Insights
Last updated: April 19, 2026
Application No. 19/053,962

Preserving Image Quality in Temporally Compressed Video Streams

Non-Final OA • §103 §DP
Filed
Feb 14, 2025
Examiner
LEE, JIMMY S
Art Unit
2482
Tech Center
2400 — Computer Networks
Assignee
Comcast Cable Communications LLC
OA Round
1 (Non-Final)
Grant Probability: 56% (Moderate)
OA Rounds: 1-2
To Grant: 3y 7m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 56% (170 granted / 302 resolved; -1.7% vs TC avg)
Interview Lift: +28.1% (strong) for resolved cases with interview vs. without
Avg Prosecution: 3y 7m (33 currently pending)
Total Applications: 335 (career, across all art units)
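As a rough illustration, the headline figures above can be reproduced from the raw counts. This is a sketch using the common definition of allowance rate; the analytics tool's exact formulas are not stated here, and the function name is ours.

```python
# Sketch: deriving the examiner metric shown above from raw counts.
# Assumes the standard definition (granted / resolved); the dashboard's
# exact formula is not published, so treat this as illustrative only.

def allowance_rate(granted: int, resolved: int) -> float:
    """Share of resolved applications that were granted."""
    return granted / resolved

# Figures from the card above: 170 granted out of 302 resolved.
rate = allowance_rate(170, 302)
print(f"Career allow rate: {rate:.0%}")  # prints "Career allow rate: 56%"
```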

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 71.5% (+31.5% vs TC avg)
§102: 8.8% (-31.2% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)
Tech Center average figures are estimates • Based on career data from 302 resolved cases
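As a quick consistency check on the figures above: if each delta is simply the examiner's rate minus the Tech Center average (an assumption about how the chart is built), the implied baseline can be recovered by subtraction. The rates and deltas below are the ones reported above, in percentage points.

```python
# Recover the implied Tech Center average from each (rate, delta) pair,
# assuming delta = examiner rate - TC average. This derivation formula
# is our assumption; only the numbers come from the chart above.

stats = {
    "§101": (3.2, -36.8),
    "§103": (71.5, +31.5),
    "§102": (8.8, -31.2),
    "§112": (12.8, -27.2),
}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta
    print(f"{statute}: {rate}% examiner vs ~{implied_tc_avg:.1f}% TC average")
```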

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12267508. Although the claims at issue are not identical, they are not patentably distinct from each other because it is an obvious variation of the patent indicated above. For example, the language of the instant application relates to a first video stream and a second video stream, whereas the language of U.S. Patent No. 12267508 relates to a first encoded video stream and a second encoded video stream. Additionally, the instant application inserts advertisement content as part of the second video stream, versus the claims of U.S. Patent No.
12267508 which relates to inserting advertisement content into the first encoded video stream. The table below presents the corresponding limitations in bold: Claims of the instant application Claims of U.S. Patent No. 12267508 1. A method comprising: determining, based on a distance between two intra-coded frames in a first video stream, that the first video stream is to be transcoded into a second video stream; and generating the second video stream by: transcoding each of the intra-coded frames from the first video stream into intra-coded frames in the second video stream, wherein the second video stream comprises the same or greater intra-coded frames as compared to the first video stream; and inserting advertisement content. 1. A method comprising: based on comparing a distance value to a threshold, determining that a first encoded video stream is to be transcoded into a second encoded video stream, wherein the distance value corresponds to distances between intra-coded regions in the first encoded video stream; and generating the second encoded video stream by: inserting an advertisement into the first encoded video stream; and transcoding each of the intra-coded regions from the first encoded video stream into intra-coded regions in the second encoded video stream, wherein the second encoded video stream comprises the same or greater intra-coded regions as compared to the first encoded video stream. 2. The method of claim 1, wherein the method further comprises: decoding at least a portion of the first video stream. 2. The method of claim 1, wherein the method further comprises: decoding at least a portion of the first encoded video stream. 3. The method of claim 1, wherein inserting the advertisement into the second video stream comprises: inserting, at a location in the second video stream indicated by data associated with the second video stream, the advertisement. 3.
The method of claim 1, wherein inserting the advertisement into the first encoded video stream comprises: inserting, at a location in the first encoded video stream indicated by data associated with the first encoded video stream, the advertisement. 4. The method of claim 1, further comprising: selecting, from the first video stream and based on the distance, a set of non-intra-coded frames to be intra-coded in the second video stream, wherein the generating the second video stream further comprises transcoding the selected set of non-intra-coded frames into intra-coded frames in the second video stream. 4. The method of claim 1, further comprising: selecting, from the first encoded video stream and based on the distance, a set of non-intra-coded regions to be intra-coded in the second encoded video stream, wherein the generating the second encoded video stream further comprises transcoding the selected set of non-intra-coded regions into intra-coded regions in the second encoded video stream. 5. The method of claim 1, further comprising: selecting, from the first video stream and based on a determination that the first video stream is in a first format type, a first set of predictive-coded frames to be intra-coded in the second video stream, wherein the generating the second video stream further comprises transcoding the selected first set of predictive-coded frames into intra-coded frames in the second video stream. 5. The method of claim 1, further comprising: selecting, from the first encoded video stream and based on a determination that the first encoded video stream is encoded in a first format type, a first set of predictive-coded regions to be intra-coded in the second encoded video stream, wherein the generating the second encoded video stream further comprises transcoding the selected first set of predictive-coded regions into intra-coded regions in the second encoded video stream. 6. 
The method of claim 1, further comprising: selecting, from the first video stream and based on a determination that the second video stream is to be in a first format type, a first set of predictive-coded frames to be predictive-coded in the second video stream, wherein the generating the second video stream further comprises transcoding the selected first set of predictive-coded frames into predictive-coded frames in the second video stream. 6. The method of claim 1, further comprising: selecting, from the first encoded video stream and based on a determination that the second encoded video stream is to be encoded in a first format type, a first set of predictive-coded regions to be predictive-coded in the second encoded video stream, wherein the generating the second encoded video stream further comprises transcoding the selected first set of predictive-coded regions into predictive-coded regions in the second encoded video stream. 7. The method of claim 1, further comprising: selecting, from the first video stream, a first set of predictive-coded frames, wherein the transcoding the each of the intra-coded frames further comprises applying, to one or more frames of the first set of predictive-coded frames, less spatial compression than is applied to other transcoded frames. 7. The method of claim 1, further comprising: selecting, from the first encoded video stream, a first set of predictive-coded regions, wherein the transcoding each of the intra-coded regions further comprises applying, to one or more regions of the first set of predictive-coded regions, less spatial compression than is applied to other transcoded regions. 8. 
The method of claim 1, further comprising: determining, from the first video stream, a first set of predictive-coded frames to be bi- predictive-coded in the second video stream and a second set of predictive-coded frames to be predictive-coded in the second video stream, wherein the generating the second video stream further comprises: transcoding the first set of predictive-coded frames into bi-predictive-coded frames in the second video stream; and transcoding the second set of predictive-coded frames into predictive-coded frames in the second video stream. 8. The method of claim 1, further comprising: determining, from the first encoded video stream, a first set of predictive-coded regions to be bi-predictive-coded in the second encoded video stream and a second set of predictive-coded regions to be predictive-coded in the second encoded video stream, wherein the generating the second encoded video stream further comprises: transcoding the first set of predictive-coded regions into bi-predictive-coded regions in the second encoded video stream; and transcoding the second set of predictive-coded regions into predictive-coded regions in the second encoded video stream. 9. An apparatus comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, configure the apparatus to: determine, based on a distance between two intra-coded frames in a first video stream, that the first video stream is to be transcoded into a second video stream; and generate the second video stream by: transcoding each of the intra-coded frames from the first video stream into intra-coded frames in the second video stream, wherein the second video stream comprises the same or greater intra-coded frames as compared to the first video stream; and inserting advertisement content. 9. 
An apparatus comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, configure the apparatus to: based on comparing a distance value to a threshold, determine that a first encoded video stream is to be transcoded into a second encoded video stream, wherein the distance value corresponds to distances between intra-coded regions in the first encoded video stream; and generate the second encoded video stream by: inserting an advertisement into the first encoded video stream; and transcoding each of the intra-coded regions from the first encoded video stream into intra-coded regions in the second encoded video stream, wherein the second encoded video stream comprises the same or greater intra-coded regions as compared to the first encoded video stream. 10. The apparatus of claim 9, wherein the instructions, when executed by the one or more processors, further configure the apparatus to: decode at least a portion of the first video stream. 10. The apparatus of claim 9, wherein the instructions, when executed by the one or more processors, further configure the apparatus to: decode at least a portion of the first encoded video stream. 11. The apparatus of claim 9, wherein the instructions, when executed by the one or more processors, configure the apparatus to insert the advertisement into the second video stream by configuring the apparatus to: insert, at a location in the second video stream indicated by data associated with the second video stream, the advertisement. 11. The apparatus of claim 9, wherein the instructions, when executed by the one or more processors, configure the apparatus to insert the advertisement into the first encoded video stream by configuring the apparatus to: insert, at a location in the first encoded video stream indicated by data associated with the first encoded video stream, the advertisement. 12. 
The apparatus of claim 9, wherein the instructions, when executed by the one or more processors, further configure the apparatus to: select, from the first video stream and based on the distance, a set of non-intra-coded frames to be intra-coded in the second video stream, wherein the instructions, when executed by the one or more processors, cause the apparatus to generate the second video stream by causing the apparatus to transcode the selected set of non-intra-coded frames into intra-coded frames in the second video stream. 12. The apparatus of claim 9, wherein the instructions, when executed by the one or more processors, further configure the apparatus to: select, from the first encoded video stream and based on the distance, a set of non-intra-coded regions to be intra-coded in the second encoded video stream, wherein the instructions, when executed by the one or more processors, cause the apparatus to generate the second encoded video stream by causing the apparatus to transcode the selected set of non-intra-coded regions into intra-coded regions in the second encoded video stream. 13. The apparatus of claim 9, wherein the instructions, when executed by the one or more processors, further configure the apparatus to: select, from the first video stream and based on a determination that the first video stream is in a first format type, a first set of predictive-coded frames to be intra-coded in the second video stream, wherein the instructions, when executed by the one or more processors, cause the apparatus to generate the second video stream by causing the apparatus to transcode the selected first set of predictive-coded frames into intra-coded frames in the second video stream. 13. 
The apparatus of claim 9, wherein the instructions, when executed by the one or more processors, further configure the apparatus to: select, from the first encoded video stream and based on a determination that the first encoded video stream is encoded in a first format type, a first set of predictive-coded regions to be intra-coded in the second encoded video stream, wherein the instructions, when executed by the one or more processors, cause the apparatus to generate the second encoded video stream by causing the apparatus to transcode the selected first set of predictive-coded regions into intra-coded regions in the second encoded video stream. 14. The apparatus of claim 9, wherein the instructions, when executed by the one or more processors, further configure the apparatus to: select, from the first video stream and based on a determination that the second video stream is to be in a first format type, a first set of predictive-coded frames to be predictive-coded in the second video stream, wherein the instructions, when executed by the one or more processors, cause the apparatus to generate the second video stream by causing the apparatus to transcode the selected first set of predictive-coded frames into predictive-coded frames in the second video stream. 14. The apparatus of claim 9, wherein the instructions, when executed by the one or more processors, further configure the apparatus to: select, from the first encoded video stream and based on a determination that the second encoded video stream is to be encoded in a first format type, a first set of predictive-coded regions to be predictive-coded in the second encoded video stream, wherein the instructions, when executed by the one or more processors, cause the apparatus to generate the second encoded video stream by causing the apparatus to transcode the selected first set of predictive-coded regions into predictive-coded regions in the second encoded video stream. 15. 
One or more non-transitory computer-readable media storing instructions that, when executed, cause: determining, based on a distance between two intra-coded frames in a first video stream, that the first video stream is to be transcoded into a second video stream; and generating the second video stream by: transcoding each of the intra-coded frames from the first video stream into intra-coded frames in the second video stream, wherein the second video stream comprises the same or greater intra-coded frames as compared to the first video stream; and inserting advertisement content. 15. One or more non-transitory computer-readable media storing instructions that, when executed, cause: based on comparing a distance value to a threshold, determining that a first encoded video stream is to be transcoded into a second encoded video stream, wherein the distance value corresponds to distances between intra-coded regions in the first encoded video stream; and generating the second encoded video stream by: inserting an advertisement into the first encoded video stream; and transcoding each of the intra-coded regions from the first encoded video stream into intra-coded regions in the second encoded video stream, wherein the second encoded video stream comprises the same or greater intra-coded regions as compared to the first encoded video stream. 16. The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, further cause: decoding at least a portion of the first video stream. 16. The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, further cause: decoding at least a portion of the first encoded video stream. 17.
The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, cause the inserting the advertisement into the second video stream by causing: inserting, at a location in the second video stream indicated by data associated with the second video stream, the advertisement. 17. The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, cause the inserting the advertisement into the first encoded video stream by causing: inserting, at a location in the first encoded video stream indicated by data associated with the first encoded video stream, the advertisement. 18. The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, further cause: selecting, from the first video stream and based on the distance, a set of non-intra-coded frames to be intra-coded in the second video stream, wherein the instructions, when executed, cause the generating the second video stream by causing transcoding the selected set of non- intra-coded frames into intra-coded frames in the second video stream. 18. The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, further cause: selecting, from the first encoded video stream and based on the distance, a set of non-intra-coded regions to be intra-coded in the second encoded video stream, wherein the instructions, when executed, cause the generating the second encoded video stream by causing transcoding the selected set of non-intra-coded regions into intra-coded regions in the second encoded video stream. 19. 
The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, further cause: selecting, from the first video stream and based on a determination that the first video stream is in a first format type, a first set of predictive-coded frames to be intra-coded in the second video stream, wherein the instructions, when executed, cause the generating the second video stream by causing transcoding the selected first set of predictive-coded frames into intra- coded frames in the second video stream. 19. The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, further cause: selecting, from the first encoded video stream and based on a determination that the first encoded video stream is encoded in a first format type, a first set of predictive-coded regions to be intra-coded in the second encoded video stream, wherein the instructions, when executed, cause the generating the second encoded video stream by causing transcoding the selected first set of predictive-coded regions into intra-coded regions in the second encoded video stream. 20. The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, further cause: selecting, from the first video stream and based on a determination that the second video stream is to be in a first format type, a first set of predictive-coded frames to be predictive-coded in the second video stream, wherein the instructions, when executed, cause the generating the second video stream by causing transcoding the selected first set of predictive-coded frames into predictive-coded frames in the second video stream. 20. 
The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, further cause: selecting, from the first encoded video stream and based on a determination that the second encoded video stream is to be encoded in a first format type, a first set of predictive-coded regions to be predictive-coded in the second encoded video stream, wherein the instructions, when executed, cause the generating the second encoded video stream by causing transcoding the selected first set of predictive-coded regions into predictive-coded regions in the second encoded video stream. Claims 1,5-9,12-15,18-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1,3-5,7-11,15-18 of U.S. Patent No. 11539963 in view of Ozawa; Kazunori (US 20100100900 A1). Although the claims at issue are not identical, they are not patentably distinct from each other because it is an obvious variation of the patent indicated above. The table below presents the corresponding limitations in bold: Claims of the instant application Claims of U.S. Patent No. 11539963 1. A method comprising: determining, based on a distance between two intra-coded frames in a first video stream, that the first video stream is to be transcoded into a second video stream; and generating the second video stream by: transcoding each of the intra-coded frames from the first video stream into intra-coded frames in the second video stream, wherein the second video stream comprises the same or greater intra-coded frames as compared to the first video stream; and inserting advertisement content. 1.
A method comprising: receiving, by a computing device, a first encoded video stream; determining, based on a target average distance between intra-coded regions to be encoded in a second encoded video stream, that the first encoded video stream is to be transcoded into the second encoded video stream, wherein a target quantity of intra-coded regions in the second encoded video stream is the same as or greater than a quantity of intra-coded regions in the first encoded video stream; determining each of the intra-coded regions in the first encoded video stream; and generating the second encoded video stream by transcoding the determined intra-coded regions from the first encoded video stream into intra-coded regions in the second encoded video stream. But it does not explicitly teach inserting advertisement content. However, Ozawa teaches additionally, generating the second video stream (¶54 and fig. 1, transcoder 22, depicted in fig. 1, used to “synthesize the contents into one image and then re-encode the synthesized image into one image”) by: inserting advertisement content. (¶54, convert the screen sizes of the contents containing the plurality of images and the “advertisement contents” to synthesize the contents “into one image”) It would have been obvious to one with ordinary skill in the art at the time of the filing date of the claimed invention to combine the compressed video streams of Syed with the transcoder of Ozawa which transcodes advertisements into one image. This allows for displaying of advertisement contents without any remodeling. 5.
The method of claim 1, further comprising: selecting, from the first video stream and based on a determination that the first video stream is in a first format type, a first set of predictive-coded frames to be intra-coded in the second video stream, wherein the generating the second video stream further comprises transcoding the selected first set of predictive-coded frames into intra-coded frames in the second video stream. 3. The method of claim 1, further comprising: selecting, from the first encoded video stream and based on a determination that the first encoded video stream is encoded in a first format type, a first set of predictive-coded regions to be intra-coded in the second encoded video stream; and generating the second encoded video stream by further transcoding the selected first set of predictive-coded regions into intra-coded regions in the second encoded video stream. 6. The method of claim 1, further comprising: selecting, from the first video stream and based on a determination that the second video stream is to be in a first format type, a first set of predictive-coded frames to be predictive-coded in the second video stream, wherein the generating the second video stream further comprises transcoding the selected first set of predictive-coded frames into predictive-coded frames in the second video stream. 4. The method of claim 1, further comprising: selecting, from the first encoded video stream and based on a determination that the second encoded video stream is to be encoded in a first format type, a first set of predictive-coded regions to be predictive-coded in the second encoded video stream; and generating the second encoded video stream by further transcoding the selected first set of predictive-coded regions into predictive-coded regions in the second encoded video stream. 7. 
The method of claim 1, further comprising: selecting, from the first video stream, a first set of predictive-coded frames, wherein the transcoding the each of the intra-coded frames further comprises applying, to one or more frames of the first set of predictive-coded frames, less spatial compression than is applied to other transcoded frames. 5. The method of claim 1, further comprising: selecting, from the first encoded video stream, a first set of predictive-coded regions, wherein the transcoding further comprises applying, to one or more regions of the first set of predictive-coded regions, less spatial compression than is applied to other transcoded regions. 8. The method of claim 1, further comprising: determining, from the first video stream, a first set of predictive-coded frames to be bi- predictive-coded in the second video stream and a second set of predictive-coded frames to be predictive-coded in the second video stream, wherein the generating the second video stream further comprises: transcoding the first set of predictive-coded frames into bi-predictive-coded frames in the second video stream; and transcoding the second set of predictive-coded frames into predictive-coded frames in the second video stream. 7. The method of claim 1, further comprising: determining, from the first encoded video stream, a first set of predictive-coded regions to be bi-predictive-coded in the second encoded video stream and a second set of predictive-coded regions to be predictive-coded in the second encoded video stream; and generating the second encoded video stream further by: transcoding the first set of predictive-coded regions into bi-predictive-coded regions in the second encoded video stream; and transcoding the second set of predictive-coded regions into predictive-coded regions in the second encoded video stream. 9. 
An apparatus comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, configure the apparatus to: determine, based on a distance between two intra-coded frames in a first video stream, that the first video stream is to be transcoded into a second video stream; and generate the second video stream by: transcoding each of the intra-coded frames from the first video stream into intra-coded frames in the second video stream, wherein the second video stream comprises the same or greater intra-coded frames as compared to the first video stream; and inserting advertisement content. 8. A system comprising: a first computing device configured to transmit a first encoded video stream to a second computing device; and the second computing device configured to: receive, from the first computing device, the first encoded video stream; determine, based on a target average distance between intra-coded regions to be encoded in a second encoded video stream, that the first encoded video stream is to be transcoded into the second encoded video stream, wherein a target quantity of intra-coded regions in the second encoded video stream is the same as or greater than a quantity of intra-coded regions in the first encoded video stream; determine each of the intra-coded regions in the first encoded video stream; and generate the second encoded video stream by transcoding the determined intra-coded regions from the first encoded video stream into intra-coded regions in the second encoded video stream. But it does not explicitly teach memory storing instructions that, when executed by the one or more processors, insert advertisement content. However, Ozawa teaches additionally, memory storing instructions (¶34 and fig. 3, “instruction from the terminal 3”) that, when executed by the one or more processors, (¶34 and fig.
2, “conversion by the transcoder 22” performed based on reception of an instruction from terminal 3) configure the apparatus (¶33-34, “conversion apparatus 2 is configured of “transcoder 22” which subjects “decoded images to the reduction (or enlargement) of a screen size and synthesizes the resulting images into one image”) to: generating the second video stream (¶54 and fig. 1, transcoder 22, depicted in fig. 1, used to “synthesize the contents into one image and then re-encode the synthesized image into one image”) by: inserting advertisement content. (¶54, convert the screen sizes of the contents containing the plurality of images and the “advertisement contents” to synthesize the contents “into one image”) It would have been obvious to one with ordinary skill in the art at the time of the filing date of the claimed invention to combine the compressed video streams of Syed with the transcoder of Ozawa which transcodes advertisements into one image. This allows for displaying of advertisement contents without any remodeling. 12. The apparatus of claim 9, wherein the instructions, when executed by the one or more processors, further configure the apparatus to: select, from the first video stream and based on the distance, a set of non-intra-coded frames to be intra-coded in the second video stream, wherein the instructions, when executed by the one or more processors, cause the apparatus to generate the second video stream by causing the apparatus to transcode the selected set of non-intra-coded frames into intra-coded frames in the second video stream. 9. 
The system of claim 8, wherein the second computing device is further configured to: select, from the first encoded video stream and based on the target average distance, a set of non-intra-coded regions to be intra-coded in the second encoded video stream; and generate the second encoded video stream by further transcoding the selected set of non-intra-coded regions into intra-coded regions in the second encoded video stream. 13. The apparatus of claim 9, wherein the instructions, when executed by the one or more processors, further configure the apparatus to: select, from the first video stream and based on a determination that the first video stream is in a first format type, a first set of predictive-coded frames to be intra-coded in the second video stream, wherein the instructions, when executed by the one or more processors, cause the apparatus to generate the second video stream by causing the apparatus to transcode the selected first set of predictive-coded frames into intra-coded frames in the second video stream. 10. The system of claim 8, wherein the second computing device is further configured to: select, from the first encoded video stream and based on a determination that the first encoded video stream is encoded in a first format type, a first set of predictive-coded regions to be intra-coded in the second encoded video stream; and generate the second encoded video stream by further transcoding the selected first set of predictive-coded regions into intra-coded regions in the second encoded video stream. 14. 
The apparatus of claim 9, wherein the instructions, when executed by the one or more processors, further configure the apparatus to: select, from the first video stream and based on a determination that the second video stream is to be in a first format type, a first set of predictive-coded frames to be predictive-coded in the second video stream, wherein the instructions, when executed by the one or more processors, cause the apparatus to generate the second video stream by causing the apparatus to transcode the selected first set of predictive-coded frames into predictive-coded frames in the second video stream. 11. The system of claim 8, wherein the second computing device is further configured to: select, from the first encoded video stream and based on a determination that the second encoded video stream is to be encoded in a first format type, a first set of predictive-coded regions to be predictive-coded in the second encoded video stream; and generate the second encoded video stream by further transcoding the selected first set of predictive-coded regions into predictive-coded regions in the second encoded video stream. 15. One or more non-transitory computer-readable media storing instructions that, when executed, cause: determining, based on a distance between two intra-coded frames in a first video stream, that the first video stream is to be transcoded into a second video stream; and generating the second video stream by: transcoding each of the intra-coded frames from the first video stream into intra-coded frames in the second video stream, wherein the second video stream comprises the same or greater intra-coded frames as compared to the first video stream; and inserting advertisement content. 15. 
A non-transitory, computer-readable medium storing instructions that, when executed, cause: receiving a first encoded video stream; determining, based on a target average distance between intra-coded regions to be encoded in a second encoded video stream, that the first encoded video stream is to be transcoded into the second encoded video stream, wherein a target quantity of intra-coded regions in the second encoded video stream is the same as or greater than a quantity of intra-coded regions in the first encoded video stream; determining each of the intra-coded regions in the first encoded video stream; and generating the second encoded video stream by transcoding the determined intra-coded regions from the first encoded video stream into intra-coded regions in the second encoded video stream. But does not explicitly teach, inserting advertisement content. However, Ozawa teaches additionally, generating the second video stream (¶54 and fig. 1, transcoder 22, depicted in fig. 1, used to “synthesize the contents into one image and then re-encode the synthesized image into one image”) by: inserting advertisement content. (¶54, convert the screen sizes of the contents containing the plurality of images and the “advertisement contents” to synthesize the contents “into one image”) It would have been obvious to one with ordinary skill in the art at the time of the filing date of the claimed invention to combine the compressed video streams of Syed with the transcoder of Ozawa which transcodes advertisements into one image. This allows for displaying of advertisement contents without any remodeling. 18. 
The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, further cause: selecting, from the first video stream and based on the distance, a set of non-intra-coded frames to be intra-coded in the second video stream, wherein the instructions, when executed, cause the generating the second video stream by causing transcoding the selected set of non- intra-coded frames into intra-coded frames in the second video stream. 16. The non-transitory, computer-readable medium of claim 15, wherein the instructions, when executed, further cause: selecting, from the first encoded video stream and based on the target average distance, a set of non-intra-coded regions to be intra-coded in the second encoded video stream; and generating the second encoded video stream by further transcoding the selected set of non-intra-coded regions into intra-coded regions in the second encoded video stream. 19. The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, further cause: selecting, from the first video stream and based on a determination that the first video stream is in a first format type, a first set of predictive-coded frames to be intra-coded in the second video stream, wherein the instructions, when executed, cause the generating the second video stream by causing transcoding the selected first set of predictive-coded frames into intra- coded frames in the second video stream. 17. 
The non-transitory, computer-readable medium of claim 15, wherein the instructions, when executed, further cause: selecting, from the first encoded video stream and based on a determination that the first encoded video stream is encoded in a first format type, a first set of predictive-coded regions to be intra-coded in the second encoded video stream; and generating the second encoded video stream by further transcoding the selected first set of predictive-coded regions into intra-coded regions in the second encoded video stream. 20. The one or more non-transitory computer-readable media of claim 15, wherein the instructions, when executed, further cause: selecting, from the first video stream and based on a determination that the second video stream is to be in a first format type, a first set of predictive-coded frames to be predictive-coded in the second video stream, wherein the instructions, when executed, cause the generating the second video stream by causing transcoding the selected first set of predictive-coded frames into predictive-coded frames in the second video stream. 18. The non-transitory, computer-readable medium of claim 15, wherein the instructions, when executed, further cause: selecting, from the first encoded video stream and based on a determination that the second encoded video stream is to be encoded in a first format type, a first set of predictive-coded regions to be predictive-coded in the second encoded video stream; and generating the second encoded video stream by further transcoding the selected first set of predictive-coded regions into predictive-coded regions in the second encoded video stream. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 9-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Shankarappa; Pruthvish (US 8238420 B1) in view of Ozawa; Kazunori (US 20100100900 A1).

Regarding claim 1, Shankarappa teaches: A method (Title, "video content transcoding") comprising: determining, based on a distance between two intra-coded frames in a first video stream, (6:54-67, 7:1-6, 3:65-67, 4:1-41, figs. 2 and 6, "first video content is encoded with a first number of i-frames" such as video stream 202 with the first number of "i-frames at positions 210 and 220" as depicted in fig. 2) that the first video stream (6:54-67, 7:1-6, 3:65-67, 4:1-41, figs. 2 and 6, first video content "video stream 202" depicted in fig. 2 with "i-frames at positions 210 and 220" as depicted in fig. 2) is to be transcoded into a second video stream; (6:54-67, 7:1-17, 4:25-41, figs. 2 and 6, "first video content is then transcoded to second video content" that has the "second number of i-frames and video frames", such as the video stream 204 as a "modification to the video stream 202 resulting from the insertion of i-frames" as depicted in fig. 2 with more i-frames) and generating the second video stream (6:54-67, 7:1-17, process 600 for "transcoding video content" such that "first video content is then transcoded to the second video content") by: transcoding each of the intra-coded frames (3:65-67, 4:1-11, and fig. 2, "i-frames at positions 210 and 220" depicted in fig. 2) from the first video stream (3:65-67, 4:1-11, and fig. 2, "video stream 202 includes a first i-frame at a position 210 and a second i-frame at a position 220" depicted in fig. 2) into intra-coded frames in the second video stream, (4:25-41, 7:7-17, and fig. 2, "first video content is then transcoded to second video content" that is a modification to the video stream 202 into "video stream 204" with the "insertion of i-frames at positions 230, 240, and 250" as depicted in fig. 2) wherein the second video stream (3:65-67, 4:1-41, 7:7-17, and fig. 2, "video stream 204") comprises the same or greater intra-coded frames as compared to the first video stream; (3:65-67, 4:1-41, 7:7-17, and fig. 2, video stream 204 that modifies the video stream 202 with "insertion of i-frames at positions 230, 240, and 250" in addition to the initial "i-frame at a position 210 and a second i-frame at a position 220" as depicted in fig. 2) but does not explicitly teach inserting advertisement content.

However, Ozawa teaches additionally: generating the second video stream (¶54 and fig. 1, transcoder 22, depicted in fig. 1, used to "synthesize the contents into one image and then re-encode the synthesized image into one image") by: inserting advertisement content (¶54, convert the screen sizes of the contents containing the plurality of images and the "advertisement contents" to synthesize the contents "into one image"). It would have been obvious to one with ordinary skill in the art at the time of the filing date of the claimed invention to combine the video transcoding of Shankarappa with the transcoder of Ozawa, which transcodes advertisements into one image. This allows for displaying of advertisement contents without any remodeling.

Regarding claim 2, Shankarappa with Ozawa teaches the limitations of claim 1. Shankarappa teaches additionally: decoding at least a portion of the first video stream (5:60-67, 6:1-12, 4:1-11, 7:7-17, figs. 2 and 4, encoding system 400 produces full frame 402 produced "by decoding" first video content, such as "video stream 202" depicted in fig. 2, that is transcoded).

Regarding claim 3, Shankarappa with Ozawa teaches the limitations of claim 1. Ozawa teaches additionally: inserting the advertisement into the second video stream (¶54 and fig. 1, transcoder 22 "convert the screen sizes of the contents containing the plurality of images and the advertisement contents and to synthesize the contents") comprises: inserting, at a location in the second video stream (¶54 and fig. 1, transcoder 22 used to "synthesize the contents into one image and then re-encode the synthesized image") indicated by data associated with the second video stream, (¶54 and fig. 1, "at least one of the contents" containing the image and "at least one of the advertisement contents") the advertisement (¶54 and fig. 1, "contents containing the images and the advertisement contents" that are converted for "screen sizes of the contents" are delivered).

Regarding claim 4, Shankarappa with Ozawa teaches the limitations of claim 1. Shankarappa teaches additionally: selecting, from the first video stream (4:25-40 and fig. 2, "video stream 202") and based on the distance, (4:25-40 and fig. 2, "first number of i-frames" with frequency of "i-frame" as depicted in video stream 202, fig. 2) a set of non-intra-coded frames (4:25-41 and fig. 2, video content "p- or b-frames" as depicted in video stream 202) to be intra-coded in the second video stream, (3:65-67 and 4:1-41, "p- and b-frames located between positions 210 and 220" encoded relative to the i-frame at position 210 and "p- and b-frames located after position 220" encoded relative to the i-frame at position 220, where potential "i-frames are inserted into video content (e.g., in lieu of p- or b-frames)") wherein the generating the second video stream (4:25-41 and fig. 2, video content of video stream 202 modified into "video stream 204") further comprises transcoding the selected set of non-intra-coded frames into intra-coded frames in the second video stream (4:25-41 and fig. 2, video content of video stream 202 modified into video stream 204 resulting from "insertion of i-frames at positions 230, 240, and 250" that replace the "b-frame at position 224", "p-frame at position 242", and "b-frame at position 252", respectively, as depicted in fig. 2).

Regarding claim 5, Shankarappa with Ozawa teaches the limitations of claim 1. Shankarappa teaches additionally: selecting, from the first video stream (4:25-41, 6:54-67, 7:1-17, and fig. 2, "video stream 202", which corresponds to "first video content is encoded with a first number of i-frames") and based on a determination that the first video stream is in a first format type, (4:25-41, 6:54-67, 7:1-17, and fig. 2, video stream 202 first video content is encoded with a first number of i-frames "where first video content frames are a first frame size") a first set of predictive-coded frames (4:25-41 and fig. 2, video content "p- or b-frames" as depicted in video stream 202) to be intra-coded in the second video stream, (4:1-41 and fig. 2, positions of "p- or b-frames" encoded relative to the "i-frame at position 210" and "i-frame at position 220" that can be modified) wherein the generating the second video stream (4:25-41, 6:54-67, 7:1-17, first video content is then transcoded to "second video content") further comprises transcoding the selected first set of predictive-coded frames into intra-coded frames in the second video stream (4:25-41, 6:54-67, 7:1-17, and fig. 2, p- or b-frames at positions that correspond to "modification to the video stream 202 resulting from the insertion of i-frames at positions 230, 240, and 250" replacing the "b-frame at position 224", "p-frame at position 242", and "b-frame at position 252", respectively, as depicted in fig. 2 when "first video content is then transcoded to second video content" with the "second number of i-frames").

Regarding claim 9, it is the apparatus claim of method claim 1. Shankarappa teaches additionally: An apparatus (7:18-60 and fig. 7, "computing device 700" used to implement the systems and methods depicted in fig. 7) comprising: one or more processors; (7:18-60 and fig. 7, "processor 702" included in computing device 700 as depicted in fig. 7) and memory storing instructions (7:18-60 and fig. 7, "instructions stored in the memory 704" depicted in fig. 7) that, when executed by the one or more processors (7:18-60, "processor 702 can execute instructions for encoding and decoding video" within the computing device 700). Refer to the rejection of claim 1 for the additional limitations of claim 9.

Regarding claim 10, dependent on claim 9, it is the apparatus claim of method claim 2, dependent on claim 1. Refer to the rejection of claim 2 for the additional limitations of claim 10.

Regarding claim 11, dependent on claim 9, it is the apparatus claim of method claim 3, dependent on claim 1. Refer to the rejection of claim 3 for the additional limitations of claim 11.
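For context, the mechanism the rejections map from Shankarappa (deciding to transcode when I-frames are too far apart, then promoting selected P/B frames to I-frames so the output has the same or greater number of I-frames) can be sketched as follows. This is a minimal, hypothetical illustration only: the frame labels, the target GOP distance, and the function names are assumptions for the sketch, not drawn from the reference or the claims.

```python
# Hypothetical sketch: decide whether a stream needs transcoding based on the
# distance between consecutive I-frames, then insert additional I-frames
# (in lieu of P/B frames) so no I-to-I gap exceeds the target distance.

def needs_transcode(frame_types, target_gop):
    """Return True if any gap between consecutive I-frames exceeds target_gop."""
    i_positions = [i for i, t in enumerate(frame_types) if t == "I"]
    gaps = [b - a for a, b in zip(i_positions, i_positions[1:])]
    return any(g > target_gop for g in gaps)

def insert_i_frames(frame_types, target_gop):
    """Promote P/B frames to I-frames so no I-to-I gap exceeds target_gop.

    Existing I-frames are kept, so the output always has the same or a
    greater number of I-frames than the input.
    """
    out = list(frame_types)
    last_i = None
    for i, t in enumerate(out):
        if t == "I":
            last_i = i
        elif last_i is None or i - last_i >= target_gop:
            out[i] = "I"  # replace this P/B frame with an I-frame
            last_i = i
    return out
```

For example, a stream `I B B P B B P B B I` with a target gap of 3 would gain I-frames at the former P/B positions, analogous to the insertion at positions 230, 240, and 250 described for video stream 204.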
Regarding claim 12, dependent on claim 9, it is the apparatus claim of method claim 4, dependent on claim 1. Refer to the rejection of claim 4 for the additional limitations of claim 12.

Regarding claim 13, dependent on claim 9, it is the apparatus claim of method claim 5, dependent on claim 1. Refer to the rejection of claim 5 for the additional limitations of claim 13.

Regarding claim 15, it is the computer-readable media claim of method claim 1. Shankarappa teaches additionally: One or more non-transitory computer-readable media storing instructions (7:18-60 and fig. 7, "instructions stored in the memory 704" depicted in fig. 7) that, when executed (7:18-60, "processor 702 can execute instructions for encoding and decoding video" within the computing device 700). Refer to the rejection of claim 1 for the additional limitations of claim 15.

Regarding claim 16, dependent on claim 15, it is the computer-readable media claim of method claim 2, dependent on claim 1. Refer to the rejection of claim 2 for the additional limitations of claim 16.

Regarding claim 17, dependent on claim 15, it is the computer-readable media claim of method claim 3, dependent on claim 1. Refer to the rejection of claim 3 for the additional limitations of claim 17.

Regarding claim 18, dependent on claim 15, it is the computer-readable media claim of method claim 4, dependent on claim 1. Refer to the rejection of claim 4 for the additional limitations of claim 18.

Regarding claim 19, dependent on claim 15, it is the computer-readable media claim of method claim 5, dependent on claim 1. Refer to the rejection of claim 5 for the additional limitations of claim 19.

Claims 6, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shankarappa; Pruthvish (US 8238420 B1) in view of Ozawa; Kazunori (US 20100100900 A1), further in view of Coban; Muhammed Z. et al. (US 20100329338 A1).

Regarding claim 6, Shankarappa with Ozawa teaches the limitations of claim 1. Shankarappa teaches additionally: selecting, from the first video stream (4:25-41, 6:54-67, 7:1-6, and fig. 2, "video stream 202", which corresponds to "first video content is encoded with a first number of i-frames") and based on a determination that the second video stream is to be in a first format type, (6:54-67, 7:1-6, and fig. 6, "A second number of i-frames for the first video content" determined based on "playback capabilities" and being a number "greater than the first number of i-frames") a first set of predictive-coded frames (4:25-41 and fig. 2, video content "p- or b-frames" as depicted in video stream 202) to be coded in the second video stream (4:25-41 and fig. 2, "video stream 204 illustrates a modification to the video stream 202 resulting from the insertion of i-frames at positions 230, 240, and 250", which replace the "b-frame at position 224", "p-frame at position 242", and "b-frame at position 252", respectively, as depicted in fig. 2), but does not explicitly teach a first set of predictive-coded frames to be predictive-coded in the second video stream, wherein the generating the second video stream further comprises transcoding the selected first set of predictive-coded frames into predictive-coded frames in the second video stream.

However, Coban teaches additionally: a first set of predictive-coded frames (¶35-36 and fig. 3, "sequence of one or more B 305(a,b,c,d)" and one or more P frames 306 that follow I-coded frame 304a and stop before I-frame 304b as depicted in fig. 3) to be predictive-coded in the second video stream, (¶35-36 and fig. 3, "original B frames 305(a-d) are replaced with transcoded P frames 307(a-d)") wherein the generating the second video stream (¶36 and fig. 3, transcode 301 "the collection of frames 302 comprising B frames into the collection of frames 303 comprising only I and P frames") further comprises transcoding the selected first set of predictive-coded frames (¶35-36 and fig. 3, "sequence of one or more B 305(a,b,c,d)" replaced with "transcoded P frames 307(a-d)" while one or more P frames 306 are untouched as depicted in fig. 3) into predictive-coded frames in the second video stream (¶35-36 and fig. 3, "original B frames 305(a-d) are replaced with transcoded P frames 307(a-d)"). It would have been obvious to one with ordinary skill in the art at the time of the filing date of the claimed invention to combine the video transcoding of Shankarappa and the transcoder of Ozawa with the transcoding of Coban, which replaces B frames with P frames so that a group of pictures (GOP) is transcoded into a collection of frames comprising only I and P frames. Techniques that transcode streams to only I and P frames allow for efficient use of battery and processing power on devices like mobile phones and PDAs.

Regarding claim 14, dependent on claim 9, it is the apparatus claim of method claim 6, dependent on claim 1. Refer to the rejection of claim 6 for the additional limitations of claim 14.

Regarding claim 20, dependent on claim 15, it is the computer-readable media claim of method claim 6, dependent on claim 1. Refer to the rejection of claim 6 for the additional limitations of claim 20.

Claims 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Shankarappa; Pruthvish (US 8238420 B1) in view of Ozawa; Kazunori (US 20100100900 A1), further in view of Lin, Chia-Wen et al. (US 20050169377 A1).

Regarding claim 7, Shankarappa with Ozawa teaches the limitations of claim 1. Shankarappa teaches additionally: selecting, from the first video stream, (3:65-67, 4:1-41, and fig. 2, "video stream 202" with "p- and b-frames located between positions 210 and 220" relative to the i-frame at position 210 and "p- and b-frames located after position 220" relative to the i-frame at position 220) a first set of predictive-coded frames, (3:65-67, 4:1-41, and fig. 2, "b-frame at position 224", "p-frame at position 242", and "b-frame at position 252" as depicted in fig. 2) wherein the transcoding the each of the intra-coded frames further comprises applying, (6:54-67, 7:1-17, and fig. 6, "first video content is then transcoded to second video content, where the second video content has the second number of i-frames and video frames of the second frame size (step 612)") to one or more frames of the first set of predictive-coded frames, (6:54-67, 7:1-17, 4:25-41, figs. 6 and 2, first video content is then transcoded to second video content, where the "b-frame at position 224" is replaced with the i-frame at position 230, the "p-frame at position 242" is replaced with the i-frame at position 240, and the "b-frame at position 252" is replaced with the i-frame at position 250) less spatial compression than is applied (6:54-67, 7:1-17, and fig. 6, first video content is then transcoded to second video content, where the second video content with "the second frame size is smaller than the first frame size" of the first video content), but does not explicitly teach wherein the transcoding the each of the intra-coded frames further comprises applying, to one or more frames of the first set of predictive-coded frames, less spatial compression than is applied to other transcoded frames.

However, Lin teaches additionally: wherein the transcoding the each of the intra-coded frames (¶38-40 and fig. 8, "transcoding scheme" performs the "full-resolution decoding for I" frames and P frames, and the reduced-resolution decoding for B frames) further comprises applying, (¶38-40 and fig. 8, "performing the DCT-MC downscaling") to one or more frames of the first set of predictive-coded frames, (¶38-40 and fig. 8, perform "DCT-MC downscaling for B frames in the decoder-loop") less spatial compression than is applied to other transcoded frames (¶38-40 and fig. 8, full-resolution decoding for I and P frames and "reduced-resolution decoding for B frames"). It would have been obvious to one with ordinary skill in the art at the time of the filing date of the claimed invention to combine the video transcoding of Shankarappa and the transcoder of Ozawa with the video downscaling of Lin, which performs reduced-resolution decoding for B frames. This can lead to significant computation savings, since B-frames occupy a large portion of an I-B-P structured video.

Regarding claim 8, Shankarappa with Ozawa teaches the limitations of claim 1. Shankarappa teaches additionally: determining, from the first video stream, (3:65-67, 4:1-41, "video stream 202" includes sequences of "i-, b-, and p-frames") a first set of predictive-coded frames to be bi-predictive-coded (3:65-67, 4:1-41, and fig. 2, "b-frames" included in the video stream 202 relative to the "i-frame at position 210" as depicted in fig. 2) and a second set of predictive-coded frames to be predictive-coded (3:65-67, 4:1-41, and fig. 2, "p-frames" included in the video stream 202 relative to the "i-frame at position 210" as depicted in fig. 2) in the second video stream, (3:65-67, 4:1-41, and fig. 2, "video stream 204" modified from "insertion of i-frames at positions 230, 240, and 250", which includes only i-frames and p-frames as depicted in fig. 2) but does not explicitly teach wherein the generating the second video stream further comprises: transcoding the first set of predictive-coded frames into bi-predictive-coded frames in the second video stream; and transcoding the second set of predictive-coded frames into predictive-coded frames in the second video stream.

However, Lin teaches additionally: determining, from the first video stream, (¶37-40 and fig. 8, saved decoded incoming "input bit-streams" in full frame memory 811) a first set of predictive-coded frames to be bi-predictive-coded (¶37-40 and fig. 8, "B-frames" sent to "spatial downscaled DCT-MCdec" 815 as depicted in fig. 8) in the second video stream (¶37-40 and fig. 8, downscaled "B-frames" output from "reduced DCT-MC 801a" sent to "adder" for encoding as depicted in fig. 8) and a second set of predictive-coded frames to be predictive-coded (¶37-40 and fig. 8, "P-frames" sent to "full DCT-MCdec" 813 as depicted in fig. 8) in the second video stream, (¶37-40 and fig. 8, full picture resolution "P-frames" output from "reduced DCT-MC 801a" sent to "adder" for encoding as depicted in fig. 8) wherein the generating the second video stream (¶37-40 and fig. 8, output full-resolution "I-frames", full-resolution "P-frames", and spatially downscaled "B-frames" output for encoding as depicted in fig. 8) further comprises: transcoding the first set of predictive-coded frames into bi-predictive-coded frames in the second video stream; (¶37-40 and fig. 8, transcoding scheme at "spatial downscaled DCT-MCdec 815" performs "DCT-MC downscaling for B-frames in the decoder-loop") and transcoding the second set of predictive-coded frames into predictive-coded frames in the second video stream (¶37-40 and fig. 8, transcoding scheme at "full DCT-MCdec 813" performs "discrete cosine transform and motion compensation of full resolution for P-frames"). It would have been obvious to one with ordinary skill in the art at the time of the filing date of the claimed invention to combine the video transcoding of Shankarappa and the transcoder of Ozawa with the video downscaling of Lin, which performs reduced-resolution decoding for B frames. This can lead to significant computation savings, since B-frames occupy a large portion of an I-B-P structured video.
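For context, the B-to-P conversion Coban is cited for (re-encoding every bi-predictive frame as a predictive frame so the output GOP contains only I and P frames) can be sketched as follows. This is a minimal, hypothetical illustration; the frame labels and the function name are assumptions for the sketch, not taken from the reference.

```python
# Hypothetical sketch: replace every bi-predictive (B) frame with a
# predictive (P) frame, leaving I and P frames untouched, so the output
# GOP contains only I and P frames (cf. Coban, frames 302 -> 303).

def transcode_b_to_p(frame_types):
    """Return the GOP with each B frame re-labeled as a P frame."""
    return ["P" if t == "B" else t for t in frame_types]
```

A GOP such as `I B B P B B P` would thus become `I P P P P P P`, the I-and-P-only structure that the cited motivation ties to reduced decoding cost on battery-constrained devices.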
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIMMY S LEE, whose telephone number is (571) 270-7322. The examiner can normally be reached Monday through Friday, 10 AM-8 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph G. Ustaris, can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSEPH G USTARIS/
Supervisory Patent Examiner, Art Unit 2483

/JIMMY S LEE/
Examiner, Art Unit 2483

Prosecution Timeline

Feb 14, 2025: Application Filed
Jan 21, 2026: Non-Final Rejection — §103, §DP
Jan 30, 2026: Interview Requested
Feb 09, 2026: Applicant Interview (Telephonic)
Feb 09, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604034: METHOD FOR PARTITIONING BLOCK AND DECODING DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596190: MILLIMETER WAVE DISPLAY ARRANGEMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12581086: MERGE WITH MVD BASED ON GEOMETRY PARTITION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12563112: SPATIALLY UNEQUAL STREAMING (granted Feb 24, 2026; 2y 5m to grant)
Patent 12554017: EBS/TOF/RGB CAMERA FOR SMART SURVEILLANCE AND INTRUDER DETECTION (granted Feb 17, 2026; 2y 5m to grant)
Based on the 5 most recent grants by this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 56% (84% with interview, +28.1%)
Median Time to Grant: 3y 7m
PTA Risk: Low
Based on 302 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month