DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 9, 10, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mandelli et al., US 2022/0141403.
In regard to claim 1, Mandelli et al., US 2022/0141403, discloses an imaging sensor comprising:
a pixel array (see figure 12, element 1210) including a plurality of pixel circuits (see figure 4, elements 410a and 410b) (see para 56-59); and
a plurality of binning transistors (see figure 11, element 1126 and para 52),
wherein a first portion of the plurality of pixel circuits individually includes an intensity photodiode (photodiodes used in illumination intensity detecting mode) and a second portion (photodiodes used in contrast change detecting mode) of the plurality of pixel circuits individually includes an event vision sensor (EVS) photodiode (event camera/DVS photodiodes) (see para 2 and 31-35), and
wherein the plurality of binning transistors is configured to bin together at least one of the first portion or the second portion (see para 51-53).
In regard to claim 9, since Mandelli et al., US 2022/0141403, discloses an imaging sensor and its operation as described above in regard to claim 1, the method of claim 9 is also disclosed (see claim 1 above).
In regard to claim 10, Mandelli et al., US 2022/0141403, discloses the method according to claim 9, wherein the first mode is a non-binning mode, and wherein, in the non-binning mode, the plurality of binning transistors are in an OFF state (see para 52).
In regard to claim 19, since Mandelli et al., US 2022/0141403, discloses an electronic device and its operation as described above in regard to claim 1, the electronic device of claim 19 is also disclosed (see claim 1 above).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 9-13, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Watanabe et al., US 2018/0167575 A1 (hereinafter "Watanabe"), in view of Fu et al., US 2017/0302866 A1 (hereinafter "Fu"), with all the circuit variations as disclosed.
Regarding Claim 1:
Watanabe teaches a solid-state imaging device, wherein the image sensor includes normal or intensity pixels and motion or event pixels, capable of detecting the motion of an object or an object motion event (See Abstract). As for claim 1, Watanabe teaches:
An imaging sensor (Fig 2, imaging device 1 includes an imaging sensor or pixel array 40, which includes a plurality of normal or intensity pixels and a plurality of motion or event pixels 52; See [0045 – 0049]) comprising: a pixel array (Fig 4, array 40 includes a plurality of normal pixels 51, motion detection pixels 52, and event pixels 52a; See [0090; 0091]) including a plurality of pixel circuits (Fig 3A shows the pixel circuitry of the normal pixels (See [0057 – 0064]) and Fig 3B shows the circuitry of the motion/event pixels (See [0065 – 0072]); Fig 16 shows the circuitry of a multi-pixel 55, configured to include two high-definition normal multi-pixels 541 (left portion of the drawing) and one motion/event detection multi-use pixel 552 (right side of the drawing)); and wherein a first portion of the plurality of pixel circuits individually includes an intensity photodiode (Fig 16, normal or normal-use pixels 541 with intensity photodiodes PD12a and PD12b and FDs FD1a and FD1b; See [0242; 0243; 0244]) and a second portion of the plurality of pixel circuits individually includes an event vision sensor (EVS) photodiode (Fig 16 includes one motion detection multi-use pixel 552 that shares photodiode PD12 and has a configuration in which the amplifier 521, the bias transistor 522, and the capacitor 5231 of the switched-capacitor amplifier circuit 523 are provided in the motion detection multi-use pixel for each shared photodiode PD12; Fig 16 also includes the address event representation (AER) circuit 526; See [0246; 0247; 0248; 0249; 0256]).
Although Watanabe teaches several transistors as part of the pixel circuitry and that the motion detection multi-use pixel 552 operates on the basis of adding/summing the charge signals (See [0256]), Watanabe fails to teach or suggest the use of "a plurality of binning transistors and wherein the plurality of binning transistors is configured to bin together at least one of the first portion or the second portion", which in the same field of endeavor is taught by Fu. Fu teaches an image sensor with dynamic pixel binning (See Abstract). As for the image sensor, Fu teaches, for example, Fig 2B with photoconversion layer 240, FDs 250, and enabling transistors 254a,c and 254b,d, which can disable bridges 252a,c while enabling bridges 252b,d (See [0031]). Fig 2C includes the photosensitive layer 240, wherein the photodiodes are coupled to FDs 250a – d by transistors 254a – d and the FDs are bridged to each other by transistors 256a – d that are capable of binning any of the pixel rows or columns; to activate the bridges, the transistors need to be gated (See [0030; 0036]). Fu also teaches that the pixels can be coupled to an adjacent pixel by bridges in the same row or column (See [0041]). As for the event imaging sensor, Fu teaches in Fig 3 an image sensor with dynamic pixel binning, with image sensor layer 310, pixel select 320, column sharing layer 330, and global event generator 340 or an analog signal comparator (See [0043; 0044]).
Therefore, it would have been obvious to one of ordinary skill in the art before the filing date of the instant application to combine the various circuits taught by Watanabe with the binning of adjacent pixels in the row or column direction using bridges to couple the various floating diffusion nodes (FDs) as taught by Fu, to achieve predictable results: a device that detects motion events by alternating/interleaving between binning configurations may be advantageously employed in applications such as motion detection, to more efficiently detect motion within two successive captured image frames (See Fu [0032]).
Regarding Claims 2 and 4:
The rejection of claim 1 is incorporated herein. As for claim 2, the binning transistors include an SNG transistor that horizontally bins together sense nodes on a row-by-row basis (Fig 2C, bridges 252a – c, implemented in part by transistors 256a – d, represent the SNG transistors positioned between adjacent sense nodes, e.g., between FDs 250a and 250b or between FDs 250b and 250c; See [0030; 0036]). These transistors can also represent the ALSEN transistors of claim 4, which are positioned between two FDs as in Fig 2C (See [0030; 0036]).
Regarding Claims 3 and 5:
The rejection of claim 1 is incorporated herein. Fu teaches that his image sensor is composed of rows and columns (See [0021]) and, as discussed for claim 1, the binning between two sense nodes or two floating diffusion nodes can be done by using bridges between two adjacent FDs, binning them via transistors 256 in Fig 2C; Fig 3 includes a column select layer 330 that would select the individual binned pixel outputs by column or row (See [0043]). In this case the transistors 256 would represent the VNG transistors for adjacent pixels in the vertical or column direction, positioned between two sense nodes, or could represent the FDI transistors positioned between two FDs.
Regarding Claim 9:
The rejection of claim 1 is incorporated herein. Claim 9 corresponds to the method steps for controlling the pixel circuitry disclosed for the imaging sensor of claim 1; in order to control an imaging sensor such as the one disclosed in claim 1, it would have been necessary to perform the method steps disclosed in claim 9. As for claim 9, Watanabe teaches in Fig 2 a controller circuit 10, which includes a read control circuit 100 and controls the pixel array 40 via the vertical scanning circuit 20 and the horizontal scanning circuit 30 for reading normal pixels 51 and motion detection pixels 52, which include event pixels 52a (See [0052 – 0056; 0119]). Additionally, Watanabe shows in Fig 9 and Fig 10 methods for operating an image sensor that includes motion and event pixels, wherein the system acquires the event pixel signals, detects the positions where motion is detected, and then calculates the distribution of positions at which motion is detected. In step S103, it determines whether the distribution is larger than a threshold value; if it is not, the process ends. If the magnitude of the distribution is larger than the threshold value, it proceeds to determine the region according to the magnitude of the distribution (step S104) and then reads the pixel signals within the region (See [0132 – 0137; 0140 – 0146]); for reading the normal pixels, see [0141]. However, Watanabe fails to teach the binning mode, which in the same field of endeavor is taught by Fu. In Fig 6, Fu teaches enabling a first binning configuration (step 610) and reading first pixel values (step 620), enabling a second binning configuration (step 630) and reading second pixel values (step 640), comparing the first and second values for a pixel (step 650) and outputting a binary indication (step 660), then comparing all the pixels (step 670) and outputting the comparison results (See [0066]).
Therefore, it would have been obvious to one of ordinary skill in the art before the filing date of the instant application to combine the various circuits taught by Watanabe with the binning of adjacent pixels in the row or column direction using bridges to couple the various floating diffusion nodes (FDs) as taught by Fu: a device that detects motion events by alternating/interleaving between binning configurations may be advantageously employed in applications such as motion detection, to more efficiently detect motion within two successive captured image frames (See Fu [0032]).
Regarding Claim 10:
The rejection of claim 9 is incorporated herein. As for the claim 10 limitations, Fu teaches that the bridges between adjacent pixels in a row can be disabled, which means that the pixels are not binned (See [0031]).
Regarding Claims 11 and 12:
The rejection of claims 1, 2, 3, and 9 is incorporated herein. Claims 11 and 12 have a similar scope to claims 2 and 3 but as applied to claim 9 instead. Therefore, claims 11 and 12 are rejected under the same rationale as claims 2 and 3 above.
Regarding Claim 13:
The rejection of claims 1, 4 and 9 is incorporated herein. Claim 13 has a similar scope to claim 4 but as applied to claim 9 instead. Therefore, claim 13 is rejected under the same rationale as claim 4 above.
Regarding Claim 19:
The rejection of claim 1 is incorporated herein. Claim 19 pertains to an electronic device with all the limitations as disclosed in claim 1. As for the electronic device, Watanabe teaches in Fig 1 an imaging device 100 with a pixel array 40 having a plurality of intensity or normal pixels 51 and motion detection pixels 52, which include event detection pixels 52a. The pixel circuit for normal pixels is shown in Fig 3A and the circuit for motion detection pixels is shown in Fig 3B. Fig 4 shows the details of the image array with regions including normal pixels 51 and motion pixels 52 with the event pixel 52a. Fig 16 shows normal or normal-use pixels 541 with intensity photodiodes PD12a and PD12b and FDs FD1a and FD1b (See [0242; 0243; 0244]). Fig 16 also includes one motion detection multi-use pixel 552 that shares photodiode PD12 and has a configuration in which the amplifier 521, the bias transistor 522, and the capacitor 5231 of the switched-capacitor amplifier circuit 523 are provided in the motion detection multi-use pixel for each shared photodiode PD12, as well as the address event representation (AER) circuit 526 (See [0246; 0247; 0248; 0249; 0256]).
Although Watanabe teaches several transistors as part of the pixel circuitry and that the motion detection multi-use pixel 552 operates on the basis of adding/summing the charge signals (See [0256]), Watanabe fails to teach or suggest the use of "a plurality of binning transistors and wherein the plurality of binning transistors is configured to bin together at least one of the first portion or the second portion", which in the same field of endeavor is taught by Fu. Fu teaches an image sensor with dynamic pixel binning (See Abstract). As for the image sensor, Fu teaches, for example, Fig 2B with photoconversion layer 240, FDs 250, and enabling transistors 254a,c and 254b,d, which can disable bridges 252a,c while enabling bridges 252b,d (See [0031]). Fig 2C includes the photosensitive layer 240, wherein the photodiodes are coupled to FDs 250a – d by transistors 254a – d and the FDs are bridged to each other by transistors 256a – d that are capable of binning any of the pixel rows or columns; to activate the bridges, the transistors need to be gated (See [0030; 0036]). Fu also teaches that the pixels can be coupled to an adjacent pixel by bridges in the same row or column (See [0041]). As for the event imaging sensor, Fu teaches in Fig 3 an image sensor with dynamic pixel binning, with image sensor layer 310, pixel select 320, column sharing layer 330, and global event generator 340 or an analog signal comparator (See [0043; 0044]).
Therefore, it would have been obvious to one of ordinary skill in the art before the filing date of the instant application to combine the various circuits taught by Watanabe with the binning of adjacent pixels in the row or column direction using bridges to couple the various floating diffusion nodes (FDs) as taught by Fu, to achieve predictable results: a device that detects motion events by alternating/interleaving between binning configurations may be advantageously employed in applications such as motion detection, to more efficiently detect motion within two successive captured image frames (See Fu [0032]).
Regarding Claim 20:
The rejection of claims 2 and 19 is incorporated herein. Claim 20 has the same scope as claim 2 but as applied to claim 19 instead. Therefore, claim 20 is rejected under the same rationale as claim 2.
Allowable Subject Matter
Claims 6-8 and 14-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 2024/0015412 discloses an imaging sensor with a DVS that performs pixel summing. US 2021/0152757 discloses an imaging device with a DVS chip that sums the pixel signals. US 2021/0067679 discloses an imaging device with an imaging sensor and an event sensor.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEVELL V SELBY whose telephone number is (571)272-7369. The examiner can normally be reached Monday-Thursday 6 AM - 3:30 PM; Friday 6-10 AM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lin Ye, can be reached at 571-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GEVELL V SELBY/Primary Examiner, Art Unit 2638
gvs