DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed 11/4/2025 has been entered.
Claim Status
Claims 1-20 are pending and are addressed in this Office action.
Claims 1-2, 4-7, 9, 12, 15, and 17-18 are amended.
Allowable Subject Matter
Claim 2 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments
Applicant’s arguments with respect to claim 1 have been fully considered, but they are not persuasive for the reasons set forth below.
Applicant argues Su fails to teach "identifying, based on the respective determined region saliency scores, a set of important regions from the plurality of regions, the set of important regions comprising at least a first region and a second region," "generating first mitigation data using a first loss mitigation method for the first region" "generating second mitigation data using a second loss mitigation method, different from the first loss mitigation method, for the second region;" and "transmitting an encoded video stream for the frame including the first mitigation data for the first region and the second mitigation data for the second region," because Su’s teaching of "strengths of FEC protection" cannot reasonably be interpreted as teaching two separate methods.
The examiner respectfully disagrees. Su teaches generating coded packets using different strengths of FEC protection by assigning different total numbers of coded packets based on priorities. For example, a highest-priority image block may be processed using a first loss mitigation method that assigns the strongest FEC protection strength, namely the highest ratio between the total number of coded packets transmitted for the first image block and the total number (fixed or constant for each of the columns of Fig. 2B) of source packets generated for the first image block, while a second image block having the second highest priority may be processed using a second loss mitigation method that assigns a lower FEC protection strength, namely a lower ratio between the total number of coded packets transmitted for the second image block and the total number of source packets generated for the second image block (par. 122-125, Fig. 2B). This demonstrates "generating first mitigation data using a first loss mitigation method for the first region" and "generating second mitigation data using a second loss mitigation method, different from the first loss mitigation method, for the second region."
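For illustration only, the following sketch shows how two FEC protection strengths amount to two distinct loss mitigation methods, each fixing a different ratio of coded packets transmitted to source packets generated per block. The packet counts and ratios below are hypothetical, not Su's actual parameters.

```python
def coded_packet_count(num_source_packets: int, protection_ratio: float) -> int:
    """Total coded packets to transmit for a block under a given FEC strength."""
    return round(num_source_packets * protection_ratio)

NUM_SOURCE_PACKETS = 10   # fixed or constant per column, as described for Fig. 2B
STRONG_RATIO = 1.5        # first method: highest-priority block (hypothetical value)
WEAKER_RATIO = 1.2        # second method: second-highest priority (hypothetical value)

first_block_coded = coded_packet_count(NUM_SOURCE_PACKETS, STRONG_RATIO)    # 15 packets
second_block_coded = coded_packet_count(NUM_SOURCE_PACKETS, WEAKER_RATIO)   # 12 packets
```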
Applicant’s arguments with respect to claim 9 have been fully considered, but they are not persuasive for the reasons set forth below.
Applicant argues Su fails to teach "determine a frame saliency value for the frame;" "determine a region saliency value for at least one region of the plurality of regions;" "generate, for one or more regions of the plurality of regions determined to be important regions based on a determination that respective region saliency values corresponding to the plurality of regions at least meet or exceed the frame saliency value, one or more mitigation packets for data packets associated with the respective one or more regions," because Su fails to teach or suggest independent determinations of frame saliency and region saliency that may be used to determine whether or not a region is an "important region."
The examiner respectfully disagrees. Su teaches the spatially downsampled entire image TDA (frame) may have the highest priority 0 as shown in Fig. 3C and 3E (par. 174). This demonstrates "determine a frame saliency value for the frame." Su further teaches identifying the respective priority values of the image blocks indicating how important each block is (par. 63 and 169, Fig. 3C). This demonstrates "determine a region saliency value for at least one region of the plurality of regions." Su further teaches the image block 0 having the highest priority 0 and image blocks 1 and 4 (TD1 and TD4) having the second highest priority 1 (par. 174-175, Fig. 3C). In other words, there is a determination that image blocks 1 and 4 have a priority value higher than the image block 0 (the frame) priority value. Further, Su teaches that image blocks having higher priorities are indicated as being relatively important (par. 63). Therefore, Su teaches independent determinations of frame saliency and region saliency that may be used to determine whether or not a region is an "important region."
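For illustration only, the following sketch reflects the examiner's reading of Su (par. 174-175, Fig. 3C): the downsampled whole frame carries priority value 0, and a region whose priority value meets or exceeds that frame value is treated as important. The block identifiers and priority values are illustrative.

```python
FRAME_PRIORITY = 0                         # priority value of the downsampled entire image TDA
region_priorities = {0: 0, 1: 1, 4: 1}     # block id -> priority value (per Fig. 3C)

important_regions = [
    block for block, priority in region_priorities.items()
    if priority >= FRAME_PRIORITY
]
print(important_regions)                   # [0, 1, 4]
```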
Applicant’s arguments with respect to claim 15 have been considered, but they are moot in view of the new grounds of rejection set forth below.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3-5, 7-9, 11, and 13-14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Su et al. (WO 2024/167859).
Regarding claim 1, Su teaches: A computer-implemented method [a computing device causes performance of a method (par. 214)], comprising:
segmenting a frame into a plurality of regions [partitioning an image into blocks (par. 165, Fig. 3B and 4A)]
determining a region saliency score for each region of the plurality of regions based on a saliency map for the frame [a priority map indicates or identifies the respective priority scores of the image blocks indicating how important each block is (par. 63 and 169, Fig. 3C)]
identifying, based on the respective determined region saliency scores, a set of important regions of the plurality of regions, the set of important regions comprising at least a first region and a second region [image blocks having higher priority indicate that they are relatively important areas or regions (par. 63). The lower the number indicated in the lower portion of Fig. 3C, the higher the priority of the image block in the corresponding position in the upper portion of Fig. 3C; for example, image blocks having the value 1 are high priority and image blocks having the value 2 are the next highest priority (par. 169 and 174-175, Fig. 3C)]
generating first mitigation data using a first loss mitigation method for the first region [generating coded packets using different strengths of FEC protection by way of assigning different total numbers of coded packets based on priorities. For example, a highest priority image block may be processed using a first loss mitigation method of assigning the strongest FEC protection strength by way of assigning the highest ratio between a total number of coded packets transmitted for the first image block and a total number (fixed or constant for each of the columns of FIG. 2B) of source packets generated for the first image block (par. 3, 19, 61, 63, 122-125, and 174-175, Fig. 2B and 3C)]
generating second mitigation data using a second loss mitigation method, different from the first loss mitigation method, for the second region [a second image block having a second highest priority may be processed using a second loss mitigation method of assigning a lower FEC protection strength for the second image block by way of assigning a lower ratio between a total number of coded packets transmitted for the second image block and a total number (fixed or constant for each of the columns of FIG. 2B) of source packets generated for the second image block (par. 63, 121-125 and 175-177, Fig. 2B and 3C)] and
transmitting an encoded video stream for the frame including the first mitigation data for the first region and the second mitigation data for the second region [sending an encoded tile-based video stream comprising the image blocks and the FEC data in the coded packets (par. 60-62 and 171, Fig. 1A)].
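For illustration only, the following sketch ties together the claim 1 steps as mapped above: segmenting a frame into blocks, scoring each block from a saliency map, selecting important blocks, and applying one of two FEC strengths (two loss mitigation methods) per block before emitting the stream. The grid size, ratios, and the frame-average selection rule are hypothetical choices, not Su's actual parameters.

```python
def process_frame(saliency_map, grid=4, src_pkts=10):
    """Return (block, source_packet_count, coded_packet_count) per block.

    saliency_map: 2D list of per-pixel saliency values whose height and
    width are assumed divisible by `grid`.
    """
    h, w = len(saliency_map), len(saliency_map[0])
    bh, bw = h // grid, w // grid
    scores = {}
    for by in range(grid):
        for bx in range(grid):
            block = [saliency_map[y][x]
                     for y in range(by * bh, (by + 1) * bh)
                     for x in range(bx * bw, (bx + 1) * bw)]
            scores[(by, bx)] = sum(block) / len(block)   # region saliency score
    frame_score = sum(scores.values()) / len(scores)     # simple threshold choice
    stream = []
    for block, score in scores.items():
        ratio = 1.5 if score >= frame_score else 1.0     # two loss mitigation methods
        stream.append((block, src_pkts, round(src_pkts * ratio)))
    return stream
```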
Regarding claim 3, Su teaches the computer-implemented method of claim 2; Su further teaches: determining a property of the frame; and identifying the saliency map based on the property [determining a tile/slice based image layout for the image and generating a priority map corresponding to the image layout (par. 169, Fig. 3C)].
Regarding claim 4, Su teaches the computer-implemented method of claim 1; Su further teaches: determining a level of at least one of the first loss mitigation method or the second loss mitigation method based on at least one of the first region saliency score or the second region saliency score [determining a strength of the FEC to apply to image blocks based on the priorities assigned to each image block (par. 61 and 169, Fig. 3C). Providing specific levels of error protection to specific image blocks (par. 63)].
Regarding claim 5, Su teaches the computer-implemented method of claim 1; Su further teaches: determining, for the frame, a frame saliency score [determining a highest priority of 0 for the spatially downsampled image (par. 174, Fig. 3C)]
determining, for the first region of the plurality of regions, that the first region saliency score is greater than or equal to the frame saliency score [determining that a priority score of 1 for an image block is greater than the frame priority score of 0 (par. 169 and 175, Fig. 3C and 3E)] and
identifying the first region as an important region of the set of important regions in response to determining that the first region saliency score is greater than or equal to the frame saliency score [determining that this image block has the second highest priority (par. 174-175, Fig. 3C and 3F). Image blocks having higher priority indicate that they are relatively important areas or regions (par. 63)].
Regarding claim 7, Su teaches the computer-implemented method of claim 1; Su further teaches: selecting, based on a first region saliency score, the first loss mitigation method for the first region; and selecting, based on a second region saliency score, the second loss mitigation method for the second region [based on the different image block priorities, different FEC protection methods are selected, such as a higher strength protection that assigns more coded packets or a lower strength protection that assigns fewer coded packets (par. 121-125, Fig. 2B)].
Regarding claim 8, Su teaches the computer-implemented method of claim 1; Su further teaches: the frame is part of a streaming video [video streaming (par. 41 and 60)].
Regarding claim 9, Su teaches: A processor [a processor (par. 213-214, Fig. 5)], comprising:
one or more circuits [a circuit (par. 216)] to:
generate a set of data packets for a plurality of regions of a frame [partitioning an image into blocks (par. 165, Fig. 3B and 4A) and generating or deriving tile/slice source (network) packets for each of the image blocks (par. 60)]
determine a frame saliency value for the frame [the spatially downsampled entire image TDA (frame) may have the highest priority 0 as shown in Fig. 3C and 3E (par. 174)]
determine a region saliency value for at least one region of the plurality of regions [identify the respective priority values of the image blocks indicating how important each block is (par. 63 and 169, Fig. 3C)]
generate, for one or more regions of the plurality of regions determined to be important regions based on a determination that respective region saliency values corresponding to the plurality of regions at least meet or exceed the frame saliency value, one or more mitigation packets for data packets associated with the respective one or more regions [Determining the image block 0 having the highest priority 0 and the image blocks 1 and 4 (TD1 and TD4) having the second highest priority 1 (par. 174-175, Fig. 3C). Having a higher priority indicates the image block is relatively important (par. 63). Generating coded packets using error protection methods to prevent lost packets, such as unequal error protection (UEP) forward error coding (FEC) to provide relatively strong FEC to image blocks with a relatively high priority (par. 3, 19, 61-63, 114, and 122-125, Fig. 2B and 3C)] and
transmit a data stream including the set of data packets and the one or more mitigation packets corresponding to the important regions [sending a stream of the coded packets including the source packets, the image blocks, and the FEC data (par. 60-62, 120-125, and 171, Fig. 1A)].
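For illustration only, the following sketch reflects the claim 9 mapping onto Su (par. 60-62, 120-125): source packets are generated per block, additional FEC mitigation packets are generated only for important blocks, and everything is sent in one stream. The packet structure and counts are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    block: int
    kind: str   # "source" or "fec"

def build_stream(blocks, important_blocks, src_per_block=10, fec_per_block=3):
    """Interleave per-block source packets with FEC packets for important blocks."""
    stream = []
    for block in blocks:
        stream += [Packet(block, "source") for _ in range(src_per_block)]
        if block in important_blocks:
            stream += [Packet(block, "fec") for _ in range(fec_per_block)]
    return stream

stream = build_stream(blocks=[0, 1, 2], important_blocks={0, 1})
# blocks 0 and 1 each get 10 source + 3 FEC packets; block 2 gets 10 source packets
```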
Regarding claim 11, Su teaches: the processor of claim 9; Su further teaches: the one or more mitigation packets include information for forward error correction or packet duplication [packets are generated with forward error coding (FEC) (par. 61)].
Regarding claim 13, Su teaches: the processor of claim 9; Su further teaches: the one or more circuits are further to: identify a saliency map for the frame defining saliency values for each pixel in the frame [determining a tile/slice based image layout for the image, including each image block comprising pixels and generating a priority map corresponding to the image layout (par. 80 and 169, Fig. 3C)].
Regarding claim 14, Su teaches: the processor of claim 9; Su further teaches: the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for cloud gaming; a system for streaming content over a network; a system for performing deep learning operations; a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system for performing operations for a conversational AI application; a system for performing operations for a generative AI application; a system for performing operations using a language model; a system for performing one or more generative content operations using a large language model (LLM); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing one or more generative content operations using a language model; a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources [a system for rendering an image (par. 22). streaming content (par. 41)].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Su et al. (WO 2024/167859) in view of Lin et al. (US 2023/0252949).
Regarding claim 6, Su teaches the computer-implemented method of claim 1; Su does not explicitly disclose: at least one region saliency score is based on pixel saliency values of pixels within the respective region.
Lin teaches: at least one region saliency score is based on pixel saliency values of pixels within the respective region [pixel values of pixel points corresponding to pixel units in the screen region in the image frame; and determines a priority of the screen region based on an average value of the pixel values of the pixel points (par. 83)].
It would have been obvious to one of ordinary skill in the art, having the teachings of Su and Lin before the effective filing date of the claimed invention, to modify the method of Su by incorporating Lin's teaching that at least one region saliency score is based on pixel saliency values of pixels within the respective region. The motivation for doing so would have been to allow higher priority regions of the image to be displayed first (Lin – par. 84). Therefore, it would have been obvious to combine the teachings of Su and Lin to obtain the invention as specified in the instant claim.
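For illustration only, the following sketch shows the examiner's reading of Lin (par. 83): a region's priority/saliency score derived from the average of the pixel values within that region. The array shape and values are illustrative.

```python
def region_score(pixel_saliency):
    """Average pixel saliency over a region given as a 2D list."""
    pixels = [p for row in pixel_saliency for p in row]
    return sum(pixels) / len(pixels)

score = region_score([[0.2, 0.4],
                      [0.6, 0.8]])   # (0.2 + 0.4 + 0.6 + 0.8) / 4 = 0.5
```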
Claim 10 is rejected for the same reasons given in the above rejection of claim 6.
Claims 12, 15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Su et al. (WO 2024/167859) in view of Liu et al. (US 2020/0160492).
Regarding claim 12, Su teaches: the processor of claim 9; Su does not explicitly disclose: determine the frame saliency value based on the region saliency values corresponding to the plurality of regions.
Liu teaches: determine the frame saliency value based on the region saliency values corresponding to the plurality of regions [calculating the saliency value for the image by taking the average value of the saliency values of all the pixels in the image (par. 78-79)].
It would have been obvious to one of ordinary skill in the art, having the teachings of Su and Liu before the effective filing date of the claimed invention, to modify the processor of Su by incorporating Liu's teaching of determining the frame saliency value based on the region saliency values corresponding to the plurality of regions. The motivation for doing so would have been to determine a saliency threshold (Liu – par. 79). Therefore, it would have been obvious to combine the teachings of Su and Liu to obtain the invention as specified in the instant claim.
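For illustration only, the following sketch shows Liu's teaching (par. 78-79) as applied here: a frame-level saliency value computed as the average of per-region values, then usable as the threshold for identifying important regions. The values are illustrative.

```python
region_values = [0.9, 0.3, 0.7, 0.1]                   # per-region saliency values
frame_value = sum(region_values) / len(region_values)  # 0.5, the frame saliency value
important = [i for i, v in enumerate(region_values) if v >= frame_value]
print(important)                                       # [0, 2]
```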
Regarding claim 15, Su teaches: A system, comprising:
one or more processing units [one or more processors (par. 213-214, Fig. 5)] to
determine a region saliency value for a region of a frame meets or exceeds a frame saliency value of the frame [partitioning an image into blocks (par. 165, Fig. 3B and 4A) and identifying the respective priority values of the image blocks indicating how important each block is (par. 63 and 169, Fig. 3C)] and to
generate data mitigation packets for inclusion within an encoded video stream responsive to determining the region saliency value meets or exceeds a frame saliency value of the frame [generating coded packets that are encoded into a video stream, including using error protection methods to prevent lost packets, such as unequal error protection (UEP) forward error coding (FEC) to provide relatively strong FEC to image blocks with a relatively high priority (par. 3, 19, 21, 61-63, 114, and 122-125, Fig. 2B and 3C)].
Su does not explicitly disclose: the frame saliency value corresponding to an average value of respective region saliency values for regions within the frame.
Liu teaches: the frame saliency value corresponding to an average value of respective region saliency values for regions within the frame [calculating the saliency value for the image by taking the average value of the saliency values of all the pixels in the image (par. 78-79)].
It would have been obvious to one of ordinary skill in the art, having the teachings of Su and Liu before the effective filing date of the claimed invention to modify the system of Su by incorporating the frame saliency value corresponding to an average value of respective region saliency values for regions within the frame as disclosed by Liu. The motivation for doing so would have been to determine a saliency threshold (Liu – par. 79). Therefore, it would have been obvious to combine the teachings of Su and Liu to obtain the invention as specified in the instant claim.
Regarding claim 17, Su and Liu teach: the system of claim 15; Su further teaches: the one or more processing units are further to determine a loss mitigation method based on the region saliency value [determining a priority value of 1 for image blocks 1 and 4 and that these blocks need the second highest FEC protection (par. 169 and 174-175, Fig. 3C and 3E-3F)].
Regarding claim 18, Su and Liu teach: the system of claim 17; Su further teaches: a second region saliency value for a second region within the frame meets or exceeds the frame saliency value and the one or more processing units are further to determine to use a different loss mitigation method than the loss mitigation method used for the region [determining that the next-priority image blocks, at the third highest priority 2, need the third highest FEC protection (par. 169 and 174-176, Fig. 3C and 3E-3F)].
Regarding claim 19, Su and Liu teach: the system of claim 15; Su further teaches: the data mitigation packets include information for forward error correction or packet duplication [packets are generated with forward error coding (FEC) (par. 61)].
Regarding claim 20, Su and Liu teach: the system of claim 15; Su further teaches: the system is one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for cloud gaming; a system for streaming content over a network; a system for performing deep learning operations; a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system for performing operations for a conversational AI application; a system for performing operations for a generative AI application; a system for performing operations using a language model; a system for performing one or more generative content operations using a large language model (LLM); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing one or more generative content operations using a language model; a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources [a system for rendering an image (par. 22). streaming content (par. 41)].
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Su et al. (WO 2024/167859) in view of Liu et al. (US 2020/0160492) and further in view of Lin et al. (US 2023/0252949).
Regarding claim 16, Su and Liu teach the system of claim 15; Su and Liu do not explicitly disclose: the region saliency value corresponds to an average of pixel saliency values in the region.
Lin teaches: the region saliency value corresponds to an average of pixel saliency values in the region [determine a priority of the screen region based on an average value of the pixel values of the pixel points in the screen region (par. 83)].
It would have been obvious to one of ordinary skill in the art, having the teachings of Su, Liu, and Lin before the effective filing date of the claimed invention, to modify the system of Su and Liu by incorporating Lin's teaching that the region saliency value corresponds to an average of pixel saliency values in the region. The motivation for doing so would have been to allow higher priority regions of the image to be displayed first (Lin – par. 84). Therefore, it would have been obvious to combine the teachings of Su and Liu with Lin to obtain the invention as specified in the instant claim.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alexander Boyd whose telephone number is (571)270-0676. The examiner can normally be reached Monday - Friday 9am-5pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin Bruckart can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEXANDER BOYD/Examiner, Art Unit 2424
/BENJAMIN R BRUCKART/Supervisory Patent Examiner, Art Unit 2424