DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim [1] recites the limitation “the second process mode” in line [15]. There is insufficient antecedent basis for this limitation in the claim.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims [1+7, 4-6 and 8-10] are rejected on the ground of nonstatutory double patenting as being unpatentable over claims [1+4, 2-3, 5-6 and 10] of U.S. Pat. No. 11,350,046. Although the claims at issue are not identical, they are not patentably distinct from each other because claims [1+7, 4-6 and 8-10] of the current application are an obvious variant of, and are encompassed by, claims [1+4, 2-3, 5-6 and 10] of U.S. Pat. No. 11,350,046.
Claims [1-3, 5-6 and 10] are rejected on the ground of nonstatutory double patenting as being unpatentable over claims [1-3, 5-6 and 10] of U.S. Pat. No. 11,665,442. Although the claims at issue are not identical, they are not patentably distinct from each other because claims [1-3, 5-6 and 10] of the current application are an obvious variant of, and are encompassed by, claims [1-3, 5-6 and 10] of U.S. Pat. No. 11,665,442.
Examiner's note: in the above, claim [a+b] signifies the combined scope of the corresponding claims.
Below are the tables showing the conflicting claims:
U.S. Application 18/963,804
U.S. Pat. No. 11,350,046
1. A solid-state imaging device, comprising: an imager configured to output image data; processing circuitry configured to execute, in a first process mode, a first process using a neural network calculation model based on the outputted image data; and a controller configured to: readout the image data from the imager based on a selection of the first process mode; start the first process for the readout image data with the neural network calculation model after the readout of the image data from the imager is completed; and switch between the first process mode and the second process mode based on a result of the first process; and control execution of a second process that corresponds to the second process mode. 7. The solid-state imaging device according to claim 1, wherein the processing circuitry is further configured to execute the first process at a first frame rate, and the second process is executed at a second frame rate.
1. A solid-state imaging device, comprising: an imager configured to acquire image data; a processing unit configured to execute, in a first process mode, a first process using a neural network calculation model based on the acquired image data, wherein the first process is executed at a first frame rate; and a control unit configured to: control execution of a second process at a second frame rate, wherein the second process corresponds to a second process mode, and the second process is executed without the neural network calculation model; and switch between the first process mode and the second process mode based on a result of the first process.
4. The solid-state imaging device according to claim 2, wherein the control unit is further configured to: readout the image data from the imager based on a selection of the first process mode; and start the first process based on the neural network calculation model after the readout of the image data from the imager is completed.
4. The solid-state imaging device according to claim 1, wherein the processing circuitry is further configured to execute a computation process of detection of a certain detection target, and the detection of the certain detection target is based on the image data.
2. The solid-state imaging device according to claim 1, wherein the processing unit is further configured to execute a computation process of detection of a certain detection target, the detection of the certain detection target is based on the image data.
5. The solid-state imaging device according to claim 4, wherein the computation process is based on a pre-trained learning model.
3. The solid-state imaging device according to claim 2, wherein the computation process is based on a pre-trained learning model.
6. The solid-state imaging device according to claim 4, wherein the controller is further configured to start the readout of the image data from the imager after the computation process is completed, in a state in which the first process mode is selected.
5. The solid-state imaging device according to claim 2, wherein the control unit is further configured to start readout of the image data from the imager after the computation process is completed in a state in which the first process mode is selected.
8. The solid-state imaging device according to claim 7, wherein in a case where a certain detection target is detected in the first process mode, the second process is executed to readout the image data at the second frame rate higher than the first frame rate.
6. The solid-state imaging device according to claim 1, wherein in a case where a certain detection target is detected in the first process mode, the second process executes read out of the image data at the second frame rate without the execution of the first process that uses the neural network calculation model, and the second frame rate is higher than the first frame rate.
9. The solid-state imaging device according to claim 7, wherein in a case where a certain detection target is detected in the first process mode, the second process is executed at the second frame rate that is the same as the first frame rate.
10. The solid-state imaging device according to claim 9, wherein in a case where a certain detection target is detected in the first process mode, the second process is executed at the second frame rate based on the neural network calculation model, and the second frame rate is the same as the first frame rate.
10. An electronic device, comprising: a solid-state imaging device that includes: an imager configured to output image data; processing circuitry configured to execute, in a first process mode, a first process using a neural network calculation model based on the outputted image data; and a control unit configured to: readout the image data from the imager based on a selection of the first process mode; start the first process for the readout image data with the neural network calculation model after the readout of the image data from the imager is completed; switch between the first process mode and a second process mode based on a result of the first process; and control execution of a second process that corresponds to the second process mode. (This claim has substantially the same limitations as claim [1] of 18/963,804.)
1. A solid-state imaging device, comprising: an imager configured to acquire image data; a processing unit configured to execute, in a first process mode, a first process using a neural network calculation model based on the acquired image data, wherein the first process is executed at a first frame rate; and a control unit configured to: control execution of a second process at a second frame rate, wherein the second process corresponds to a second process mode, and the second process is executed without the neural network calculation model; and switch between the first process mode and the second process mode based on a result of the first process.
4. The solid-state imaging device according to claim 2, wherein the control unit is further configured to: readout the image data from the imager based on a selection of the first process mode; and start the first process based on the neural network calculation model after the readout of the image data from the imager is completed.
U.S. Application 18/963,804
U.S. Pat. No. 11,665,442
1. A solid-state imaging device, comprising: an imager configured to output image data; processing circuitry configured to execute, in a first process mode, a first process using a neural network calculation model based on the outputted image data; and a controller configured to: readout the image data from the imager based on a selection of the first process mode; start the first process for the readout image data with the neural network calculation model after the readout of the image data from the imager is completed; and switch between the first process mode and the second process mode based on a result of the first process; and control execution of a second process that corresponds to the second process mode.
1. A solid-state imaging device, comprising: an imager configured to acquire image data; a processing unit configured to execute, in a first process mode, a first process using a neural network calculation model based on the acquired image data; and a control unit configured to: readout the image data from the imager based on a selection of the first process mode; start the first process for the readout image data with the neural network calculation model before the readout of the image data from the imager is completed; switch between the first process mode and a second process mode based on a result of the first process; and control execution of a second process that corresponds to the second process mode.
2. The solid-state imaging device according to claim 1, wherein the controller is further configured to start the first process based on a specific condition that is satisfied.
2. The solid-state imaging device according to claim 1, wherein the control unit is further configured to start, based on a specific condition that is satisfied, the first process for the readout image data before the readout of the image data from the imager is completed.
3. The solid-state imaging device according to claim 2, wherein the specific condition indicates that a quantity of light is equal to or larger than a preset threshold, and the image data corresponds to the quantity of the light.
3. The solid-state imaging device according to claim 2, wherein the specific condition indicates that a quantity of light is equal to or larger than a preset threshold, and the image data corresponds to the quantity of the light.
5. The solid-state imaging device according to claim 4, wherein the computation process is based on a pre-trained learning model.
5. The solid-state imaging device according to claim 4, wherein the computation process is based on a pre-trained learning model.
6. The solid-state imaging device according to claim 4, wherein the controller is further configured to start the readout of the image data from the imager after the computation process is completed, in a state in which the first process mode is selected.
6. The solid-state imaging device according to claim 4, wherein the control unit is further configured to start the readout of the image data from the imager after the computation process is completed, in a state in which the first process mode is selected.
10. An electronic device, comprising: a solid-state imaging device that includes: an imager configured to output image data; processing circuitry configured to execute, in a first process mode, a first process using a neural network calculation model based on the outputted image data; and a control unit configured to: readout the image data from the imager based on a selection of the first process mode; start the first process for the readout image data with the neural network calculation model after the readout of the image data from the imager is completed; switch between the first process mode and a second process mode based on a result of the first process; and control execution of a second process that corresponds to the second process mode.
10. An electronic device, comprising: a solid-state imaging device that includes: an imager configured to acquire image data; a processing unit configured to execute, in a first process mode, a first process using a neural network calculation model based on the acquired image data; and a control unit configured to: readout the image data from the imager based on a selection of the first process mode; start the first process for the readout image data with the neural network calculation model before the readout of the image data from the imager is completed; switch between the first process mode and a second process mode based on a result of the first process; and control execution of a second process that corresponds to the second process mode.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims [1-2 and 4-11] are rejected under 35 U.S.C. 103 as being unpatentable over Lin (US 2016/0373645).
Regarding claim [1], Lin discloses a solid-state imaging device (see pupil detection device 1, fig. 2) comprising: an imager configured to output image data (see image sensor 12, fig. 2, and ¶0026, "The pupil detection device 1 includes an active light source 11, an image sensor 12 and a processing unit 13"); processing circuitry configured to execute, in a first process mode, a first process based on the outputted image data (see processing unit 13, fig. 2, and ¶0032; the first process mode corresponds to the processing unit 13 being configured to identify the pupil area, and the first process is the pupil identification process described in ¶0032); and a controller configured to: readout the image data from the imager based on a selection of the first process mode (see ¶0032, "when the processing unit 13 is configured to identify the pupil area (e.g. a first mode), the image sensor 12 may capture image frames with a second resolution and a second frame rate" [the image is read out from the image sensor for identification by the processing unit 13 as depicted in fig. 2; the processing unit 13 is also equated to the claimed controller by virtue of performing the pupil identification disclosed in ¶0032]); start the first process for the readout image data (see ¶0032 [the image data is captured before the identification by the processing unit 13]); switch between the first process mode and the second process mode based on a result of the first process (see ¶0032, "when the processing unit 13 is configured to perform the iris recognition (e.g. a second mode), the image sensor 12 may capture image frames with a first resolution and a first frame rate, whereas when the processing unit 13 is configured to identify the pupil area (e.g. a first mode)" [the processing unit 13 alternates between iris recognition (second mode) and pupil identification (first mode) after the pupil identification is completed, as suggested in ¶0032]); and control execution of a second process that corresponds to the second process mode (see ¶0032, "when the processing unit 13 is configured to perform the iris recognition (e.g. a second mode)" [when iris recognition is required]).
Lin, in the embodiment of fig. 2, discloses a processing unit (see 13, fig. 2). However, Lin in the embodiment of fig. 2 does not appear to explicitly disclose the processing unit configured to perform a process based on a neural network calculation model for identifying and reading out the image data acquired from the image sensor.
Nonetheless, Lin, in the embodiment of fig. 6, discloses a display controller 63 and a process based on a neural network calculation model for identifying and reading out the image data acquired from the image sensor (see Lin, ¶0060, "The display controller 63 includes a face detection engine 631, an eye detection engine 633 and an eye protection engine 635. The face detection engine 631 and the eye detection engine 633 are machine learning engines which are trained to recognize a face and eyes respectively. For example, the face detection engine 631 and the eye detection engine 633 are implemented by an adaptive boosting or a convolution neural network, and trained before shipment of the image system 600 to respectively recognize the face and eyes").
Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lin's embodiment of fig. 2 with his embodiment of fig. 6, for example by incorporating the display controller 63 of fig. 6 into the device of fig. 2 of Lin, since this would facilitate the detection of the iris and the pupil and enhance usability.
Regarding claim [2], Lin as modified further discloses wherein the controller is further configured to start the first process based on a specific condition that is satisfied (see Lin ¶0032, "when the processing unit 13 is configured to identify the pupil area").
Regarding claim [4], Lin as modified further discloses wherein the processing circuitry is further configured to execute a computation process of detection of a certain detection target, and the detection of the certain detection target is based on the image data (see Lin ¶0032, "the processing unit 13 is configured to identify the pupil area (e.g. a first mode), the image sensor 12 may capture image frames with a second resolution and a second frame rate").
Regarding claim [5], Lin as modified further discloses wherein the computation process is based on a pre-trained learning model (see Lin ¶0060, "The face detection engine 631 and the eye detection engine 633 are machine learning engines which are trained to recognize a face and eyes respectively").
Regarding claim [6], Lin as modified further discloses wherein the controller is further configured to start the readout of the image data from the imager after the computation process is completed, in a state in which the first process mode is selected (see Lin ¶0032, "the processing unit 13 is configured to identify the pupil area (e.g. a first mode), the image sensor 12 may capture image frames with a second resolution and a second frame rate" [images are captured and read out for the processing unit to identify a region in the image]).
Regarding claim [7], Lin as modified further discloses wherein the processing circuitry is further configured to execute the first process at a first frame rate, and the second process is executed at a second frame rate (see Lin ¶0032, "when the processing unit 13 is configured to perform the iris recognition (e.g. a second mode), the image sensor 12 may capture image frames with a first resolution and a first frame rate, whereas when the processing unit 13 is configured to identify the pupil area (e.g. a first mode), the image sensor 12 may capture image frames with a second resolution and a second frame rate").
Regarding claim [8], Lin as modified further discloses wherein in a case where a certain detection target is detected in the first process mode, the second process is executed to readout the image data at the second frame rate higher than the first frame rate (see Lin ¶0039, "whereas the first frame rate may be lower than the second frame rate").
Regarding claim [9], Lin as modified further discloses wherein in a case where a certain detection target is detected in the first process mode, the second process is executed at the second frame rate that is the same as the first frame rate (see Lin ¶0032, "adjustable range of the frame rate may be between 30 FPS and 480 FPS (frame/second), but the present disclosure is not limited thereto" [conditionally, the first and second frame rates can be adjusted to be equal, as implied in ¶0032]).
Regarding claim [10], except for a few changes in wording, the claim has substantially the same limitations as claim [1] and is thus analyzed and rejected by the same reasoning.
Regarding claim [11], Lin as modified further discloses wherein the controller is further configured to dynamically adjust a frame rate between the first process mode and the second process mode, wherein the first process mode operates at a lower frame rate than the second process mode, thereby reducing power consumption during the execution of the neural network calculation model (see Lin ¶0032, "wherein the first resolution may be higher than the second resolution and the first frame rate may be lower than the second frame rate. In this embodiment, an adjustable range of the image resolution may be between 640×480 and 160×120, and an adjustable range of the frame rate may be between 30 FPS and 480 FPS (frame/second), but the present disclosure is not limited thereto" [the frame rates can be automatically controlled based on the configured function of the processor, and power can be saved based on the selected frame rate; for example, operating at 30 FPS may save power compared to operating at 60 FPS, since fewer frames are captured in the same amount of time]).
Examiner's note: claim [3] is rejected under double patenting only.
Allowable Subject Matter
Claims [12-13] are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim [12], none of the prior art of record, either singularly or in combination, teaches or suggests: a pixel array; an analog-to-digital converter (ADC); and a signal processor, wherein the ADC, the signal processor, and the processing circuitry are arranged in a layout in order from an upstream side along a flow of a signal read out from the pixel array, wherein the layout reduces power consumption.
Claim [13] is objected to as containing allowable subject matter due to its dependency on claim [12].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED A BERHAN whose telephone number is (571)270-5094. The examiner can normally be reached 9:00 AM - 5:00 PM (Max-Flex).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Twyler Haskins can be reached at 571-272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AHMED A BERHAN/Primary Examiner, Art Unit 2639