Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
2. This is the initial Office Action based on the application filed on September 16, 2024. The Examiner acknowledges the following:
3. Claims 1 – 20 were filed by Applicant.
4. The drawings filed on 09/16/2024 are accepted by the Examiner.
5. Current claims 1 – 20 are pending and they are being considered for examination.
Information Disclosure Statement
6. The IDS documents filed on 10/30/2025, 05/08/2025 and 09/30/2025 are acknowledged by the Examiner.
Priority
7. Priority is based on provisional patent application No. 63/644492, filed on 05/08/2024, which is the priority date for this application.
Claim Rejections - 35 USC § 102
8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 6, 10, 11 and 13 – 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Debarati Kundu et al., WO 2023/163799 A1 (hereinafter "Kundu"). (Note: the Kundu reference is from the IDS and was provided by Applicant.)
Regarding Claims 1, 13 and 19:
Kundu teaches a method and apparatus for generating one or more frames, comprising: capturing, using an image sensor, sensor data for a frame associated with a scene; generating a first portion of the frame based on information corresponding to a region of interest (ROI), the first portion having a first resolution; generating a second portion of the frame, the second portion having a second resolution that is lower than the first resolution; and outputting the first portion of the frame and the second portion of the frame, wherein the image sensor outputs the first portion and the second portion. Kundu further teaches: receiving a mask associated with the scene, wherein the mask includes the information corresponding to the ROI associated with a previous frame and is determined based on at least one of gaze information of a user, a predicted gaze of the user, an object detected in the scene, a depth map generated for the scene, and a saliency map of the scene; generating, using an image signal processor, an output frame at least in part by combining the first portion of the frame and the second portion of the frame; and processing, using an image signal processor, the first portion of the frame based on first one or more parameters to improve visual fidelity of the first portion while refraining from processing the second portion of the frame.
As for claim 1, Kundu teaches
A method (Fig 9 shows a flow chart illustrating an example of a process 900 for generating one or more frames using one or more foveated sensing techniques. See [00111 – 00113]), comprising: capturing sensor information with a plurality of sensor pixels in an array of sensor pixels (Fig 9, block 902: capturing sensor information with a plurality of sensor pixels of the image sensor 130, as seen in Fig 1, which includes one or more arrays of photodiodes. See [0041]); binning (Fig 7B, binner 756 receives the raw digital pixels from the ADC 752 and a control signal from the foveal controller 754 and generates a low-resolution, or binned, image 704. See [00105]), within the array of sensor pixels prior to readout of the sensor pixels and based on a region-of-interest (ROI) indicator (See [00111]) in a field-of-view of the array of sensor pixels, the sensor information of at least a first sensor pixel and a second sensor pixel of the plurality of sensor pixels (Fig 2B and Fig 3 show an example of a binning pattern 205 resulting from the application of a binning process to the quad color filter array 200 (See [0052 – 0055]). Fig 9, block 908 includes generating a second portion of the frame having a second resolution that is lower than the first resolution. In some cases, the first portion of the frame is a first version of the frame having the first resolution and the second portion is a second version of the frame having the second resolution, in which case the first and second versions are different frames with different resolutions. In some cases, the process 900 includes combining a plurality of pixels of the sensor data (e.g., using binning, such as that described with respect to Figs 2A – 2B or Fig 3) in the image sensor such that the second portion of the frame has the second resolution. See [0090; 00112]); and reading out (Fig 9, block 910 corresponds to the readout and includes outputting the first portion of the frame and the second portion of the frame from the image sensor. See [00113]), from the array of sensor pixels for a single image frame: a single binned value for the first sensor pixel and the second sensor pixel; and an individual pixel value for at least a third sensor pixel of the array of sensor pixels (Fig 9, blocks 906 and 908 indicate generating a first portion of the frame for the ROI, having a first resolution (block 906), and generating a second portion of the frame using binning as shown in Fig 2B and Fig 3. See [00112]).
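The foveated readout mapped above (a full-resolution first portion for the ROI plus a binned, lower-resolution second portion read out from the same single frame) can be sketched numerically. The following is an illustrative model only, not code from the application or the Kundu reference; the function name, mask representation, and the 2×2 averaging factor are assumptions.

```python
import numpy as np

def foveated_readout(sensor, roi_mask, factor=2):
    """Model of a single-frame foveated readout: full-resolution
    values are kept where roi_mask is True (the first portion), and
    the whole array is averaged in factor x factor groups to form
    the low-resolution second portion."""
    h, w = sensor.shape
    first_portion = np.where(roi_mask, sensor, 0.0)  # ROI at full resolution
    second_portion = sensor.reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return first_portion, second_portion

# A 4x4 "capture" with the ROI covering the top-left 2x2 block.
frame = np.arange(16, dtype=float).reshape(4, 4)
roi = np.zeros((4, 4), dtype=bool)
roi[:2, :2] = True
hi_res, lo_res = foveated_readout(frame, roi)
```

In this sketch the two return values play the roles of blocks 906 and 908 of process 900: the ROI pixels survive individually, while every 2×2 neighborhood contributes a single binned value to the second portion.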
As for claim 13, Kundu teaches,
A method (Fig 9 shows a flow chart illustrating an example of a process 900 for generating one or more frames using one or more foveated sensing techniques. See [00111 – 00113]), comprising: capturing sensor information with a plurality of sensor pixels in an array of sensor pixels of an image sensor (Fig 9, block 902: capturing sensor information with a plurality of sensor pixels of the image sensor 130, as seen in Fig 1, which includes one or more arrays of photodiodes. See [0041]); binning (Fig 7B, binner 756 receives the raw digital pixels from the ADC 752 and a control signal from the foveal controller 754 and generates a low-resolution, or binned, image 704. See [00105]), wherein the image sensor comprises a color filter array having a type (Fig 1, image sensor 130: in some cases, different photodiodes may be covered by different color filters of a color filter array, and may thus measure light matching the color of the color filter covering the photodiode. Various color filter arrays can be used, including a Bayer color filter array, a quad color filter array (also referred to as a quad Bayer filter), and/or other color filter arrays. Fig 2A. See [0041; 0051]); and providing, by the image sensor, a sensor output including a subset of the sensor information and having a resolution, in at least one dimension, that is based on a region-of-interest (ROI) indicator and the type of the color filter array (Fig 6, mask 616 identifies an ROI (salient region) that can be used with the image data to generate two different portions of a single frame; binning can include combining adjacent pixels, which improves SNR and the ability to increase frame rate, but reduces the resolution of the image (See [0090]). As for the color filter array, Fig 2A, 200 shows the color filters for red, green and blue as the type of the color filter array. See [0041]).
As for claim 19, Kundu teaches,
A device (Fig 1 shows an image capturing and processing system 100. See [0036 – 0050]), comprising: an image sensor comprising a plurality of sensor pixels in an array of sensor pixels (Fig 1, image sensor 130 (See [0041]) with a plurality of pixels, which are to be binned as seen in Fig 3. See [0041; 0055]), the image sensor configured to: capture sensor information with the plurality of sensor pixels in the array of sensor pixels (Fig 9, block 902: capturing sensor information with a plurality of sensor pixels of the image sensor 130, as seen in Fig 1. See [0041]); bin (Fig 2B and Fig 3 show an example of a binning pattern 205 (See [0052 – 0055]); Fig 7B, binner 756 receives the raw digital pixels from the ADC 752 and a control signal from the foveal controller 754 and generates a low-resolution, or binned, image 704 (See [00105])), within the array of sensor pixels prior to readout of the sensor pixels and based on a region-of-interest (ROI) indicator (See [00111]) in a field-of-view of the array of sensor pixels, the sensor information of at least a first sensor pixel and a second sensor pixel of the plurality of sensor pixels (Fig 2B and Fig 3 show an example of a binning pattern 205 resulting from the application of a binning process to the quad color filter array 200 (See [0052 – 0055]). Fig 9, block 908 includes generating a second portion of the frame having a second resolution that is lower than the first resolution. In some cases, the first portion of the frame is a first version of the frame having the first resolution and the second portion is a second version of the frame having the second resolution, in which case the first and second versions are different frames with different resolutions. In some cases, the process 900 includes combining a plurality of pixels of the sensor data (e.g., using binning, such as that described with respect to Figs 2A – 2B or Fig 3) in the image sensor such that the second portion of the frame has the second resolution. See [0090; 00112]); and read out (Fig 9, block 910 corresponds to the readout and includes outputting the first portion of the frame and the second portion of the frame from the image sensor. See [00113]), from the array of sensor pixels for a single image frame: a single binned value for the first sensor pixel and the second sensor pixel; and an individual pixel value for at least a third sensor pixel of the array of sensor pixels (Fig 9, blocks 906 and 908 indicate generating a first portion of the frame for the ROI, having a first resolution (block 906), and generating a second portion of the frame using binning as shown in Fig 2B and Fig 3. See [00112]).
Regarding Claim 20:
The rejection of claim 19 is incorporated herein. As for claim 20 limitations, Fig 7B, binner 756 receives the raw digital pixels from the ADC 752 and a control signal from the foveal controller 754 and generates a low-resolution, or binned, image 704 (See [00105]). Fig 10 teaches an example of a process 1000 for generating one or more frames using one or more of the foveated sensing techniques described herein. The process 1000 can be performed by an ISP (e.g., the ISP 154 or image processor 150 of Fig 1. See [00114]). Block 1002 includes receiving an image from the image sensor 130 (See [00115]). Block 1004 generates a first version of the frame based on the ROI associated with the scene; the bitmap as in Fig 8 includes a first pixel value for pixels of the frame associated with the ROI and a second pixel value for pixels of the frame outside the ROI. Block 1006 includes generating a second version of the frame having a second resolution, which is lower than the first resolution. Kundu also teaches generating an output frame, using the ISP or a GPU processor, at least in part by combining the first version of the frame and the second version of the frame, which implies outputting a single image frame with uniform resolution in both the horizontal and vertical directions. See [00117].
Regarding Claim 14:
The rejection of claim 13 is incorporated herein. As for claim 14, Figs 2A, 2B and 3 show an example of binning in the Bayer pattern for multiple different color filters along a horizontal and a vertical direction. Since Kundu teaches the method as in claim 13, which can be applied to any image sensor with different color filter arrays and different resolutions, the resolution can include the same, or uniform, resolution in both directions. See [0051 – 0055].
Regarding Claim 15:
The rejection of claims 13 and 14 is incorporated herein. As shown for the claim 14 rejection, Figs 2A, 2B and 3 include an example of a binning pattern 205 that includes multiple color filters of the same color in different directions, and the binning process can be applied to the color filter array(s) with different resolutions, which means that a non-uniform resolution can occur for some areas of the array, such as for a particular column or row. See [0051 – 0055].
Regarding Claim 16:
The rejection of claim 13 is incorporated herein. As for claim 16 limitations, Kundu teaches in Fig 7A, the ROI detector/indicator 720. See [0099].
Regarding Claims 17 – 18:
The rejection of claims 13 and 16 is incorporated herein. As for claim 17 and claim 18 limitations, Kundu Figs 2A and 2B include an example of a binning pattern 205 that includes multiple color filters of the same color in different directions, and the binning process can be applied to the color filter array(s) with different resolutions (See [0051; 0052]). For the quad Bayer type color filter, see Fig 2B (See [0052]).
Regarding Claim 2:
The rejection of claim 1 is incorporated herein. As for claim 2 limitations, Kundu Figs 6B and 7A teach that an extended reality (XR) system 610 (Fig 6B) can include a collection of sensors 630, which can provide motion information to the perception stack 642 of the ISP to process sensor information and synthesize information for detecting the ROI (See [0094]). Fig 7A, block 710 includes a plurality of sensors that can provide motion information to the perception engine 708 to determine gaze information, and the ROI detector 720 can predict an ROI based on the gaze information (See [0099]).
Regarding Claim 6:
The rejection of claim 1 is incorporated herein. As for claim 6 limitations, Kundu Fig 2B and Fig 3 show that the first sensor pixel and the second sensor pixel, for example a red pixel and a green pixel, are disposed in the same row or the same column. (See [0052 – 0055]).
Regarding Claim 10:
The rejection of claim 1 is incorporated herein. As for claim 10 limitations, Kundu Fig 3 shows a diagram illustrating an example of a binning process applied to the Bayer pattern of a Bayer color filter array 300. As shown, the binning process bins the Bayer pattern by a factor of two along both the horizontal and vertical directions. For example, taking groups of two pixels in each direction (as marked by the arrows illustrating binning of a 2×2 set of red (R) pixels, two 2×2 sets of green (Gr) pixels, and a 2×2 set of blue (B) pixels), a total of four pixels are averaged to generate an output Bayer pattern that is half the resolution of the input Bayer pattern of the Bayer color filter array 300. The same operation may be repeated across the red, blue, green (beside the red pixels), and green (beside the blue pixels) channels (See [0055]).
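The factor-of-two Bayer binning just described (averaging each 2×2 group of same-color pixels per channel to halve the mosaic's resolution) can be sketched as follows. This is an illustrative sketch under assumed array sizes and names, not code from the Kundu reference.

```python
import numpy as np

def bin_bayer_by_two(raw):
    """Bin a Bayer mosaic by a factor of two in each direction:
    for each of the four channel positions (R, Gr, Gb, B), average
    every 2x2 group of same-color pixels, producing an output Bayer
    pattern at half the input resolution."""
    h, w = raw.shape
    out = np.empty((h // 2, w // 2), dtype=float)
    for dy in (0, 1):          # row offset of the channel in the Bayer cell
        for dx in (0, 1):      # column offset of the channel
            chan = raw[dy::2, dx::2].astype(float)  # one color channel
            ch, cw = chan.shape
            # average each 2x2 group of same-color pixels
            binned = chan.reshape(ch // 2, 2, cw // 2, 2).mean(axis=(1, 3))
            out[dy::2, dx::2] = binned  # place back in Bayer layout
    return out

# An 8x8 mosaic whose R sites (even row, even column) all read 40
# and whose other sites all read 8: per-channel averages are preserved.
mosaic = np.full((8, 8), 8.0)
mosaic[0::2, 0::2] = 40.0
half = bin_bayer_by_two(mosaic)
```

Each output site is the average of four same-color input sites, matching the "total of four pixels are averaged" operation in Fig 3 of Kundu.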
Regarding Claim 11:
The rejection of claim 1 is incorporated herein. As for claim 11 limitations, Kundu Fig 10 teaches an example of a process 1000 for generating one or more frames using one or more of the foveated sensing techniques described herein. The process 1000 can be performed by an ISP (e.g., the ISP 154 or image processor 150 of Fig 1. See [00114]). Block 1002 includes receiving an image from the image sensor 130 (See [00115]). Block 1004 generates a first version of the frame based on the ROI associated with the scene; the bitmap as in Fig 8 includes a first pixel value for pixels of the frame associated with the ROI and a second pixel value for pixels of the frame outside the ROI. Block 1006 includes generating a second version of the frame having a second resolution, which is lower than the first resolution. Kundu also teaches generating an output frame, using the ISP or a GPU processor, at least in part by combining the first version of the frame and the second version of the frame, which implies outputting a single image frame with uniform resolution in both the horizontal and vertical directions. See [00117].
Regarding Claim 12:
The rejection of claim 1 is incorporated herein. As for claim 12 limitations, Kundu Fig 9 teaches the process 900 for generating one or more frames using one or more of the foveated sensing techniques, wherein the process 900 can be performed in the image sensor 130 of Fig 1. Block 902 includes capturing images, using image sensor data 603 of Fig 6A or sensor 614 of Fig 6B, for frames of the scene (See [00111]). Block 904 includes obtaining information for the ROI associated with the scene using a mask that includes a bitmap, as seen in Fig 8, with a first pixel value for pixels of the frame associated with the ROI and a second pixel value for pixels outside the ROI (See [00111]). Block 906 includes generating a first portion of the frame for the ROI, with a first resolution. Block 908 includes generating a second portion of the frame, with a second resolution that is lower than the first resolution; in some cases, this includes combining a plurality of pixels of the sensor data using binning, as in Figs 2A and 2B, in the image sensor such that the second portion of the frame has the second resolution (See [00112]). Block 910 includes outputting the first portion of the frame and the second portion of the frame from the image sensor, which includes outputting the first portion of the frame using a first virtual channel and outputting the second portion of the frame using a second virtual channel. In some aspects, the process 900 includes generating an output frame, using the ISP or a GPU processor, at least in part by combining the first portion of the frame and the second portion of the frame (See [00113]).
Claim Rejections - 35 USC § 103
9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3 – 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Debarati Kundu et al., WO 2023/163799 A1 (hereinafter "Kundu"), in view of Xin Yang, US 2024/0114266 A1 (hereinafter "Yang"). (Note: both references are from the IDS documents.)
Regarding Claim 3:
The rejection of claim 1 is incorporated herein. Kundu teaches the claim 1 limitations. As for claim 3 limitations, Kundu Fig 3 shows an example of a binning process applied to a Bayer pattern color filter array 300, taking groups of pixels in each direction, wherein the same operation is carried out for the red, green and blue pixels; however, Kundu does not teach that the pixels being binned share a single floating diffusion region, which, in the same field of endeavor, is taught by Yang. Yang teaches an image array with a binning process wherein four pixels of two different columns, as in Fig 8, sub-pixels 110 W1, W2, W3 and W4, in a full-resolution operation (first mode), share a single floating diffusion region FD1. See [0069 – 0070].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kundu with Yang's pixel regions sharing the same floating diffusion, which allows for different modes of operation (the full-resolution output mode, the primary merging output mode and the secondary merging output mode); a comparison of their advantages is disclosed in Table 1 (See Yang [0107 – 0108] and Table 1).
Regarding Claims 4 and 5:
The rejection of claims 1 and 3 is incorporated herein. Kundu teaches claim 1 as to binning in the vertical and horizontal directions and, combined with Yang, teaches claim 3. As for claims 4 and 5 limitations, Yang Fig 2 shows a large pixel array that includes a plurality of pixels, which can accommodate a fifth sensor pixel (See [0041 – 0045]). As for reading a single data line, Yang Fig 2 shows a large pixel array that can output multiple image pixel data, and Figs 6 and 7 show the reading of one pixel column (vertical direction), as in column COL1, and another, as in COL2 (See [0069]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kundu with Yang's pixel regions sharing the same floating diffusion, which allows for different modes of operation (the full-resolution output mode, the primary merging output mode and the secondary merging output mode); a comparison of their advantages is disclosed in Table 1 (See Yang [0107 – 0108] and Table 1).
Regarding Claim 7:
The rejection of claim 1 is incorporated herein. Kundu teaches claim 1. As for claim 7 limitations, Yang Fig 2 shows a large pixel array that includes a plurality of pixels, which can accommodate a fifth sensor pixel (See [0041 – 0045]). Fig 11 shows the primary merging output mode, which includes an analog averaging mode, wherein the image sensor includes a second conversion circuit 142 corresponding to each pixel, arranged between the output end of the pixel circuit 111 and the input end of the first conversion circuit 141. The second conversion circuit 142 is configured to perform an operation of the analog averaging mode on the plurality of analog signals output by the pixel circuit 111. Specifically, the second conversion circuit may include a switch S44 connected between the first column control line COL1 and the second column control line COL2. See [0080].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kundu with Yang's pixel regions sharing the same floating diffusion, which allows for different modes of operation (the full-resolution output mode, the primary merging output mode and the secondary merging output mode); a comparison of their advantages is disclosed in Table 1 (See Yang [0107 – 0108] and Table 1).
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Debarati Kundu et al., WO 2023/163799 A1 (hereinafter "Kundu"), in view of Zhe Gao et al., US 2022/0201236 A1 (hereinafter "Gao"). (Note: both references are from the IDS documents.)
Regarding Claim 8:
The rejection of claim 1 is incorporated herein. Kundu teaches claim 1; however, it fails to teach or to fairly suggest some of the additional limitations of claim 8, which, in the same field of endeavor, are taught by Gao. Gao teaches an optical sensor, comprising: a plurality of pixel cells arranged into rows and columns of a pixel array, wherein each of the pixel cells comprises: a plurality of photodiodes configured to photogenerate charge in response to incident light; and a source follower transistor configured to generate an image data signal in response to the charge photogenerated from the plurality of photodiodes; an image readout circuit coupled to the plurality of pixel cells to read out the image data signal generated from the source follower transistor of at least a first one of the plurality of pixel cells of a first row of the pixel array; and an event driven circuit coupled to the plurality of pixel cells to read out the event data signals generated in response to the charge photogenerated from the plurality of photodiodes of a second row of the plurality of pixel cells of the pixel array, wherein the image readout circuit is coupled to read out the image data signal and the event driven circuit is coupled to read out the event data signals from the pixel array simultaneously. As for claim 8 limitations, Gao teaches in Fig 2C the readout of a pixel array 208 with 2× fast vertical binning and 2× digital binning of 4C (2×2) photodiode groupings. As seen in Fig 2C, four 4C (2×2) groupings of photodiodes 218 of the same color (e.g., red, green, blue) may be read out together with Bayer binning. In the example, 2× fast vertical binning is performed on each pair of 4C (2×2) groupings of photodiodes 218 of the same color that are coupled to the same column bitline to perform the vertical binning. Furthermore, 2× digital horizontal binning is performed on each pair of 4C (2×2) groupings of photodiodes 218 of the same color in the same row.
In one example, the 2× digital horizontal binning may be performed after the analog to digital conversion (ADC) is performed in the image readout circuit. With the 2× fast vertical binning and the 2× digital horizontal binning of the four 4C (2×2) groupings of photodiodes 218 of the same color, a 4-megapixel image/video may be captured from pixel array 208 (See [0029]).
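Gao's combination of 2× fast vertical binning (pairs of same-color 4C groupings on one column bitline, before the ADC) and 2× digital horizontal binning (pairs of groupings in a row, after the ADC) can be modeled as two successive pairwise averages. The sketch below treats each array element as one same-color 4C grouping value; the function name and the use of plain averaging at both steps are assumptions for illustration, not Gao's circuit.

```python
import numpy as np

def bin_2x2_groupings(groupings):
    """Each element stands for one 4C (2x2) same-color grouping.
    Step 1 models 2x fast vertical binning: average row pairs that
    share a column bitline. Step 2 models 2x digital horizontal
    binning: average column pairs of the vertically binned values."""
    g = np.asarray(groupings, dtype=float)
    vertical = (g[0::2, :] + g[1::2, :]) / 2.0            # analog, on the bitline
    combined = (vertical[:, 0::2] + vertical[:, 1::2]) / 2.0  # digital, post-ADC
    return combined

# Four groupings per output value: a 4x4 grid of grouping values
# bins down to 2x2, each output being the mean of a 2x2 block.
vals = np.array([[1.0, 3.0, 5.0, 7.0],
                 [5.0, 7.0, 9.0, 11.0],
                 [2.0, 2.0, 4.0, 4.0],
                 [6.0, 6.0, 8.0, 8.0]])
out = bin_2x2_groupings(vals)
```

The two-step order mirrors the passage above: the vertical average happens first (modeling the shared bitline), and the horizontal average is applied to the already-binned values (modeling the post-ADC digital step).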
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kundu with the pixel readout taught by Gao, in which the image readout circuit is coupled to read out the image data signal and the event driven circuit is coupled to read out the event data signals from the pixel array simultaneously (See Gao, Abstract).
Regarding Claim 9:
The rejection of claims 1 and 8 is incorporated herein. Kundu teaches claim 1 and, combined with Gao, teaches claim 8. Gao teaches in Fig 2C the readout of a pixel array 208 with 2× fast vertical binning and 2× digital binning of 4C (2×2) photodiode groupings. As seen in Fig 2C, four 4C (2×2) groupings of photodiodes 218 of the same color (e.g., red, green, blue) may be read out together with Bayer binning. In the example, 2× fast vertical binning is performed on each pair of 4C (2×2) groupings of photodiodes 218 of the same color that are coupled to the same column bitline to perform the vertical binning. Furthermore, 2× digital horizontal binning is performed on each pair of 4C (2×2) groupings of photodiodes 218 of the same color in the same row. In one example, the 2× digital horizontal binning may be performed after the analog to digital conversion (ADC) is performed in the image readout circuit. With the 2× fast vertical binning and the 2× digital horizontal binning of the four 4C (2×2) groupings of photodiodes 218 of the same color, a 4-megapixel image/video may be captured from pixel array 208 (See [0029]). As for the additional limitations of claim 9, Kundu Fig 7B shows binner 756 (See [00105]), which is configured to receive the raw digital pixels from the ADC 752 and a control signal from the foveation controller 754 and to generate a low-resolution (binned) image 704.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kundu with the pixel readout taught by Gao, in which the image readout circuit is coupled to read out the image data signal and the event driven circuit is coupled to read out the event data signals from the pixel array simultaneously (See Gao, Abstract).
Contact
10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LIN YE whose telephone number is (571)272-7372. The examiner can normally be reached on Monday-Friday, 8:00am - 5:00pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lin Ye can be reached on 571-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARLY S CAMARGO/Primary Examiner, Art Unit 2638