DETAILED ACTION
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 07/08/2025 has been considered by the Examiner.
Claim Objections
Claim 12, line one, is objected to because of the following informality: “wherein the at least in processer”. There is a typo where “in” should most likely read “one”. Appropriate correction is required.
Claim 15, line one, is objected to because of the following informality: “wherein the at least on processer”. There is a typo where “on” should most likely read “one”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Vidanagamachchi et al. US 20240107086 (hereinafter "Vidanagamachchi", cited in the IDS) in view of Krishnan US 20190379893 (hereinafter "Krishnan", cited in the IDS).
Regarding claim 1, Vidanagamachchi discloses a method of generating one or more frames [See paragraph 0011, which discusses a subframe containing a region of interest and an encoded downscaled image frame that are combined into a combined image frame, thereby generating a frame],
[Image: media_image1.png]
comprising: capturing, using an image sensor, sensor data for a frame associated with a scene [See paragraph 0036 and FIG. 6, item 600, which shows the content delivery system capturing the initial frame from image streams coming from the camera (640) and sensors (660)];
[Image: media_image2.png]
[Image: media_image3.png]
[Image: media_image4.png]
generating a first portion of the frame from the sensor data based on information corresponding to a first region of interest (ROI), the first portion having a first resolution [See paragraph 0011 (above), where a subframe with a region of interest is selected. A region of interest can also be called a fovea. This subframe has an original resolution, as seen in paragraph 0036 (above)];
downsampling a second portion of the frame from the sensor data to a second resolution that is lower than the first resolution [see FIG. 3, where an image frame is downscaled to a second quality that is lower than the original];
[Image: media_image5.png]
compressing the first portion of the frame based on information in the second portion of the frame corresponding to the first ROI [Paragraph 0037 discusses downscaling a portion of the frame, and generating a new frame based on the region of interest, also called a fovea];
[Image: media_image6.png]
and outputting the compressed first portion of the frame and the second portion of the frame [See paragraph 0011 (above), where encoded and decoded subframes are combined in the output].
While Vidanagamachchi teaches having a first and second portion of the frame at a first and second resolution, where the second resolution is lower than the first, it does not explicitly disclose that the second field of view is larger than the first field of view. Krishnan, however, does disclose that the field of view of the first portion associated with the ROI is smaller than that of the second portion [See Fig. 3A of Krishnan, where 311 represents the ROI, i.e., the first portion of the frame, and 340 represents the second portion of the frame, which is at a lower resolution and has a width W1 that is larger than the width X1 of the first portion].
[Image: media_image7.png]
Vidanagamachchi and Krishnan are analogous art because they are from the same field of endeavor: compressing image data based on a fovea or ROI determined using gaze tracking, so that some portions of a frame are at different resolutions than others.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Vidanagamachchi’s invention with that of Krishnan to ensure that the second FOV is larger than the first FOV. The motivation would be to provide high resolution only to the priority areas of the frame: the regions that contain the ROI, which would be smaller in size than the second FOV, which does not contain the ROI. Limiting the size of the FOV for the first portion helps to reduce bandwidth and power consumption [Krishnan 0002 and 0013].
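For context, the foveated-encoding scheme discussed above (a full-resolution portion covering only the ROI, plus a downsampled portion covering a larger field of view) can be sketched as follows. This is an illustrative sketch only; the function name, array shapes, and block-averaging downsampler are assumptions and do not appear in either cited reference.

```python
import numpy as np

def foveate(frame, roi, scale=4):
    """Split a frame into a full-resolution ROI portion and a
    downsampled background portion with a larger field of view.

    frame : (H, W) array of pixel values
    roi   : (top, left, height, width) of the region of interest
    scale : downsampling factor for the background portion
    """
    top, left, h, w = roi
    # First portion: full-resolution crop covering only the ROI (smaller FOV).
    first = frame[top:top + h, left:left + w].copy()
    # Second portion: the whole frame (larger FOV), downsampled by
    # block averaging to a lower resolution.
    H, W = frame.shape
    second = frame[:H - H % scale, :W - W % scale]
    second = second.reshape(H // scale, scale, W // scale, scale).mean(axis=(1, 3))
    return first, second

frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
first, second = foveate(frame, roi=(16, 16, 8, 8), scale=4)
# "first" keeps full resolution over the ROI only; "second" covers the
# entire (larger) field of view at 1/4 resolution in each dimension.
```

Only the small ROI crop is kept at full resolution, which is the bandwidth-saving rationale cited from Krishnan above.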
Regarding claim 2, Vidanagamachchi teaches that compressing the first portion of the frame involves difference encoding that subtracts from pixels in the second portion of the frame [See paragraph 0052 and Fig. 4, blocks 450 and 460, of Vidanagamachchi, where the pixels are encoded using differences, which are equivalent to subtraction]. While Vidanagamachchi does not explicitly state the generation of residual values, Krishnan, which performs the same operation, does [See Fig. 6 of Krishnan, which discloses subtraction creating residual values used in compression].
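The difference-encoding step mapped to claim 2 amounts to subtracting co-located pixel values of the second portion from the first portion, leaving residual values. A minimal sketch, with all names and values illustrative (not taken from the cited references):

```python
import numpy as np

def difference_encode(first, second_coloc):
    """Compress the ROI portion by subtracting co-located pixel
    values taken from the second (lower-resolution) portion,
    leaving a group of residual values."""
    return first - second_coloc

hi = np.array([[100., 102.], [104., 98.]])   # ROI pixels (first portion)
lo = np.array([[101., 101.], [101., 101.]])  # co-located second-portion pixels
residuals = difference_encode(hi, lo)
# Residuals are small wherever the two portions agree, which is what
# makes them cheap to entropy-code.
```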
[Image: media_image8.png]
[Image: media_image9.png]
[Image: media_image10.png]
[Image: media_image11.png]
Regarding claim 3, Vidanagamachchi teaches that compressing the first portion of the frame involves encoding the group of pixels in the frame using a compression algorithm [See paragraph 0021, which discusses that encoding is used to compress the image].
[Image: media_image12.png]
Regarding claim 4, Vidanagamachchi discloses decompressing the compressed first portion of the frame based on information in the second portion of the frame corresponding to the first ROI [See paragraph 0011 (above), where the portion of the frame associated with the ROI is decompressed based on another frame].
Regarding claim 5, Vidanagamachchi discloses using differences to compress the frame but does not explicitly disclose adding the pixels back to reconstruct the frame, although using this method to decode the compression is implied [See paragraph 0052 of Vidanagamachchi (above)]. However, Krishnan discloses that decompressing the compressed first portion of the frame involves adding, to each value in the group of residual values, a value of a pixel in the second portion of the frame to generate a reconstructed pixel value for each residual value in the group of residual values [See Fig. 6 (above), box 606, of Krishnan, which specifically discloses adding residual pixels to generate a reconstructed pixel value].
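The reconstruction mapped to claim 5 is the inverse of the difference encoding: each residual is added back to the co-located second-portion pixel. An illustrative sketch under the same assumed names and values as above (none taken from the references):

```python
import numpy as np

def reconstruct(residuals, second_coloc):
    """Decompress by adding, to each residual value, the value of the
    co-located pixel in the second portion, yielding a reconstructed
    pixel value for each residual."""
    return residuals + second_coloc

residuals = np.array([[-1., 1.], [3., -3.]])   # from difference encoding
lo = np.array([[101., 101.], [101., 101.]])    # co-located second-portion pixels
recon = reconstruct(residuals, lo)
# Subtraction followed by addition is lossless: recon equals the
# original ROI pixel values.
```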
Regarding claim 6, Vidanagamachchi discloses that the image sensor outputs the compressed first portion of the frame and the second portion of the frame to an image signal processor [See Fig. 2, item 200, where the compressed frame and subframes enter the playback system to be processed. The playback system acts as an image signal processor].
[Image: media_image13.png]
Regarding claim 7, Vidanagamachchi discloses an image signal processor that outputs the compressed first portion of the frame and the second portion of the frame to a frame buffer [As discussed in paragraphs 0026 and 0073, the frames can be processed in accordance with the HEVC standard to create a compressed video bitstream, therefore requiring frames to be stored in RAM before being output, which is essentially a frame buffer].
[Image: media_image14.png]
[Image: media_image15.png]
[Image: media_image16.png]
Regarding claim 8, Vidanagamachchi discloses decompressing the compressed first portion of the frame based on the second portion of the frame [See paragraph 0011 (above), where the subframe associated with the fovea is decompressed] and synthesizing the first portion of the frame and the second portion of the frame into a single frame [See paragraph 0046, where the playback system generates a combined image frame based on the multiple subframes of differing resolutions].
[Image: media_image17.png]
[Image: media_image18.png]
Regarding claim 9, Krishnan discloses that an image signal processor decompresses the compressed first portion of the frame and processes the first portion of the frame based on the second portion of the frame at a front end of the image signal processor [See paragraph 0109, where the decompressing happens at the local processor. The local processor is equivalent to a front-end, or client-side, processor].
[Image: media_image19.png]
[Image: media_image20.png]
Regarding claim 10, Krishnan discloses that an image signal processor decompresses the compressed first portion of the frame and processes the first portion of the frame based on the second portion of the frame at an offline engine of the image signal processor [See paragraph 0109 (above), where the decompressing happens at the local processor. The local processor is equivalent to an offline engine, as the processing does not require a network connection].
Claim 11 is analyzed similarly to claim 1.
Claim 12 is analyzed similarly to claim 2.
Claim 13 is analyzed similarly to claim 3.
Claim 14 is analyzed similarly to claim 4.
Claim 15 is analyzed similarly to claim 5.
Claim 16 is analyzed similarly to claim 6.
Claim 17 is analyzed similarly to claim 7.
Claim 18 is analyzed similarly to claim 8.
Claim 19 is analyzed similarly to claim 9.
Claim 20 is analyzed similarly to claim 10.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANUSHA KASHYAPA whose telephone number is (571)272-8766. The examiner can normally be reached Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park, can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANUSHA KASHYAPA/Examiner, Art Unit 2669 /CHAN S PARK/Supervisory Patent Examiner, Art Unit 2669