DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 8, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Kubendran et al. (US Publication No. 2020/0029031) in view of Wang et al. (US Publication No. 2016/0093273).
Regarding claim 1, Kubendran et al. discloses, in Figs. 1–5, applicant's solid state imaging device (paragraph 0022: sensor chip 100 includes a pixel array 102 formed thereon and a peripheral circuit assembly integrated thereon, i.e., a solid state imaging device), comprising: a pixel array comprising a plurality of imaging pixels, each of which is capable of detecting as a positive polarity event a rise of intensity of light falling on the imaging pixel which rise is larger than a respective first predetermined threshold, or as a negative polarity event a fall of the intensity which fall is larger than a respective second predetermined threshold (paragraph 0025: the circuit of a unit pixel 202 within array 102 includes a photodiode 204 that converts light 140 impinging on it into a photocurrent corresponding to the amount of received light. Paragraph 0027: in the query-driven readout mode, events 322 are read out as 2 bits, depending on whether there was an increase, decrease, or no change in pixel intensity. Any pixel in a selected row can raise an event depending on whether the intensity of light impinging on the pixel has decreased or increased compared to the previous level of intensity at that same pixel. An event is determined by using a two-level voltage test input as VREF 214, i.e., VUP and VDN. The positive and negative level-crossing thresholds for change detection are user-defined parameters in the system and can be dynamically adapted based on the level of activity detected; for example, if no change events are detected, the threshold(s) should be decreased, and if a flood of events comes in, the threshold should be increased. Thus, pixel array 102 comprises a plurality of imaging pixels 202, each of which is capable of detecting as a positive polarity event a rise (increase) of intensity of light falling on the imaging pixel 202 which rise is larger than a respective first predetermined (positive level-crossing) threshold, or as a negative polarity event a fall (decrease) of the intensity which fall is larger than a respective second predetermined (negative level-crossing) threshold);
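For illustration only, the two-level threshold test and activity-based threshold adaptation described in paragraph 0027 can be sketched in software as follows; the frame-differencing formulation, the function names, and the specific adaptation rule are assumptions made for this sketch, not the reference's disclosed circuit behavior:

import numpy as np

def detect_events(prev, curr, v_up, v_dn):
    """Classify each pixel's change versus its previous level as a positive
    event (+1), a negative event (-1), or no event (0), mirroring the
    2-bit query-driven readout of events 322."""
    delta = curr.astype(np.float64) - prev.astype(np.float64)
    events = np.zeros(curr.shape, dtype=np.int8)
    events[delta > v_up] = 1     # rise larger than the positive threshold (VUP)
    events[delta < -v_dn] = -1   # fall larger than the negative threshold (VDN)
    return events

def adapt_thresholds(events, v_up, v_dn, lo=0.01, hi=0.25):
    """Hypothetical adaptation rule: raise the thresholds under a flood of
    events and lower them when almost no change events are detected."""
    rate = np.count_nonzero(events) / events.size
    if rate > hi:
        return v_up * 1.1, v_dn * 1.1
    if rate < lo:
        return v_up * 0.9, v_dn * 0.9
    return v_up, v_dn

In this sketch the two thresholds play the roles of VUP and VDN: a change exceeding the positive threshold yields a positive polarity event, and a change below the negative threshold yields a negative polarity event.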
Kubendran et al. further discloses applicant's control unit that is configured to receive a time series of the events of both polarities detected in the pixel array, to deduce from the time series of events information on the absolute light intensity received from objects whose movements caused the events, and to reconstruct a time series of images of the objects (paragraph 0007: the image sensor, along with a controller in a feedback loop, can demonstrate activity-based event streaming to monitor pixel activity and modulate clock frequency accordingly to reduce data rate and power consumption even further. Paragraph 0024: three external clocks 101, 103 and 107 trigger gray counters whose output address codes are used to traverse the rows and columns and to read out pixel density data. Row decoder 116 selects one row at a time, during which all the columns are queried one after another. Depending on the mode of operation, either the absolute pixel intensity or the relative temporal contrast is read out from each pixel per column. Paragraph 0027: in the query-driven readout mode, events 322 are read out as 2 bits, depending on whether there was an increase, decrease, or no change in pixel intensity, with user-defined, dynamically adaptable positive and negative level-crossing thresholds, as discussed for claim 1 above. Thus, the controller is a control unit that is configured to receive a time series of the events of both positive and negative polarities detected in the pixel array 102 and to deduce from the time series of events information on the absolute light intensity received);
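A minimal sketch of the row-by-row, column-by-column query readout described in paragraph 0024, in which either absolute intensity or the 2-bit relative change is read per pixel; the data layout and the mode flag are assumptions for the sketch only:

import numpy as np

def query_readout(intensity, events, mode="events"):
    """Traverse the array one row at a time, querying every column in the
    selected row, and read out either the absolute pixel intensity or the
    2-bit relative change, depending on the mode of operation."""
    rows, cols = intensity.shape
    out = []
    for r in range(rows):        # row decoder selects one row at a time
        for c in range(cols):    # columns are queried one after another
            value = intensity[r, c] if mode == "intensity" else events[r, c]
            out.append(((r, c), value))
    return out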
Kubendran et al. discloses an imager readout method that detects a positive (increase) change or a negative (decrease) change in pixel intensity, but does not expressly disclose light intensity received from objects whose movements caused the events, or reconstructing a time series of images of the objects.
Wang et al. teaches a method in which a Dynamic Vision Sensor captures moving objects from the pixel-level changes caused by movement in a scene. Wang et al. teaches, in Figs. 1–12, applicant's light intensity received from objects whose movements caused the events and reconstructing a time series of images of the objects (paragraph 0068: processor 167 is configured to execute the program code and is operative to receive and process pixel event signals from the DVS 50 and detect the motion in a scene being sensed by the DVS 50. The digital control module 54 in the DVS 50 performs some of the processing of pixel event signals before the pixel output data are sent to the processor 167 for further processing and motion detection/display. Paragraph 0030: an Event-Driven Dynamic Vision Sensor (DVS) is an imaging sensor that only detects motion in a scene. A DVS contains an array of pixels where each pixel computes relative changes of light, or "temporal contrast." Each pixel then outputs an Address Event (AE) (or, simply, an "event") when local relative intensity changes exceed a global threshold. In a DVS, only the local pixel-level changes caused by movement in a scene are transmitted at the time they occur. Thus, the output of a DVS consists of a continuous flow of pixel events that represent the moving objects in the scene, such that the light intensity received from moving objects whose movements caused the events is used to reconstruct a time series of images of the objects for display).
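A hypothetical sketch of converting Wang's continuous flow of address events into per-interval motion frames for display, assuming the events arrive as (x, y, t, polarity) tuples (an assumed representation, not Wang's disclosed output format):

import numpy as np

def events_to_motion_frames(address_events, width, height, window):
    """Bin a continuous flow of address events, assumed here to be
    (x, y, t, polarity) tuples, into fixed-length time windows; each frame
    marks only the pixels that fired, i.e., the moving parts of the scene."""
    if not address_events:
        return []
    t_end = max(t for _, _, t, _ in address_events)
    n_frames = int(t_end // window) + 1
    frames = np.zeros((n_frames, height, width), dtype=np.int8)
    for x, y, t, pol in address_events:
        frames[int(t // window), y, x] = pol
    return list(frames)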
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the circuitry of Kubendran et al. in a manner similar to Wang et al. Doing so would result in improving the Kubendran et al. invention in a similar way as the Wang et al. invention, namely by adding the ability of a Dynamic Vision Sensor to capture moving objects from the pixel-level changes caused by movement in a scene, as taught by Wang et al., to the imager readout method of Kubendran et al. that detects a positive (increase) change or a negative (decrease) change in pixel intensity.
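For illustration, one common way such a combined system could reconstruct a time series of intensity images from signed change events is to integrate them onto a base frame; this integration scheme is a generic event-camera technique assumed for the sketch, not a scheme disclosed by either reference:

import numpy as np

def reconstruct_frames(base_frame, event_frames, step=4.0):
    """Integrate a time series of signed event maps (+1/-1/0 per pixel) onto
    a known base intensity frame. Treating each event as an intensity change
    of roughly `step` gray levels is an illustrative simplification."""
    intensity = base_frame.astype(np.float64).copy()
    frames = []
    for ev in event_frames:
        intensity += step * ev                      # accumulate signed changes
        np.clip(intensity, 0.0, 255.0, out=intensity)
        frames.append(intensity.copy())
    return frames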
Regarding claim 6, the combination of Kubendran et al. in view of Wang et al. teaches applicant's limitation wherein the control unit is configured to deduce the information on the absolute light intensity based on a machine learning algorithm (Kubendran et al., paragraph 0007: the image sensor, along with a controller in a feedback loop, can demonstrate activity-based event streaming to monitor pixel activity and modulate clock frequency accordingly to reduce data rate and power consumption even further. Wang et al., paragraph 0068: processor 167 is configured to execute the program code and is operative to receive and process pixel event signals from the DVS 50 and detect the motion in a scene being sensed by the DVS 50. Paragraph 0055: in machine learning, support vector machines are supervised learning models which, along with associated learning algorithms, are used to analyze data and recognize patterns using linear classification, such that the control unit is configured to deduce the information on the absolute light intensity based on a machine learning algorithm).
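As a loose illustration of deducing absolute intensity with a learned model of the kind Wang's paragraph 0055 mentions (support vector machines), the following sketch fits a support vector regressor on synthetic per-pixel event features; the feature choice, training data, and hyperparameters are assumptions for the sketch only and are not taken from the references:

import numpy as np
from sklearn.svm import SVR  # support vector machines, per Wang's paragraph 0055

# Hypothetical per-pixel features: positive-event count, negative-event count,
# and time since the last event over some window. The targets would be
# ground-truth absolute intensities from a training capture; the synthetic
# data below stands in for both.
rng = np.random.default_rng(0)
X_train = rng.random((500, 3))
y_train = 100.0 * X_train[:, 0] - 80.0 * X_train[:, 1] + 20.0

model = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)
intensity_estimate = model.predict(rng.random((1, 3)))  # deduced absolute intensity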
Regarding claim 8, which recites applicant's limitation wherein the control unit is configured to reconstruct the time series of images based on a machine learning algorithm: claim 8 is rejected for the reasons given for rejected claims 1 and 6 above.
Regarding claim 10: claim 10 is rejected as being fully encompassed by the reasons given for rejected claim 1 above.
Claims 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Kubendran et al. (US Publication No. 2020/0029031) in view of Wang et al. (US Publication No. 2016/0093273) as applied to claim 1 above, and further in view of Berner et al. (US Publication No. 2019/0289230).
Regarding claim 5, the combination of Kubendran et al. in view of Wang et al. teaches applicant's limitation wherein the control unit is configured to determine from the time series of events the relative amount of change of the received intensity for each of the events, and to reconstruct the time series of images based on the determined relative amount of change of the received intensity and the deduced information on the absolute light intensity (Kubendran et al., paragraph 0007: the image sensor, along with a controller in a feedback loop, can demonstrate activity-based event streaming to monitor pixel activity and modulate clock frequency accordingly to reduce data rate and power consumption even further. Paragraph 0027: in the query-driven readout mode, events 322 are read out as 2 bits, depending on whether there was an increase, decrease, or no change in pixel intensity; any pixel in a selected row can raise an event depending on whether the intensity of light impinging on the pixel has decreased or increased compared to the previous level of intensity at that same pixel, and the positive and negative level-crossing thresholds for change detection are user-defined parameters that can be dynamically adapted based on the level of activity detected, as discussed for claim 1 above;
Wang et al., paragraph 0068: processor 167 is configured to execute the program code and is operative to receive and process pixel event signals from the DVS 50 and detect the motion in a scene being sensed by the DVS 50; the digital control module 54 in the DVS 50 performs some of the processing of pixel event signals before the pixel output data are sent to the processor 167 for further processing and motion detection/display. Paragraph 0030: the output of a DVS consists of a continuous flow of pixel events that represent the moving objects in the scene, as discussed for claim 1 above. Thus, the control unit is configured to determine from the time series of events the relative amount of increasing or decreasing change of the received intensity for each of the events, and to reconstruct the time series of images for display based on what has been determined in the pixel output data sent to the processor for further processing and motion detection/display).
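Because a DVS event fires when the relative (temporal-contrast) change crosses a threshold, the relative amount of change associated with each event can be approximated by the threshold itself; a brief sketch under that assumption (the 15% contrast value is illustrative only):

import math

def relative_change_per_event(polarity, contrast_threshold=0.15):
    """Approximate the relative change carried by one event as one contrast
    threshold in log intensity; returns the multiplicative intensity factor
    implied by the event's polarity."""
    return math.exp(polarity * contrast_threshold)  # about 1.16x or 0.86x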
The combination of Kubendran et al. in view of Wang et al. teaches an imager readout method that detects a positive (increase) change or a negative (decrease) change in pixel intensity, and a method in which a Dynamic Vision Sensor processor captures moving objects from the pixel-level changes caused by movement in a scene and further processes the data of the captured moving objects, but does not expressly teach determining the speed with which the objects move, or reconstructing based on the determined speed.
Berner et al. teaches a method in which a DVS sensor estimates the speed of an object from the information the sensor collects. Berner et al. teaches, in Figs. 1–10, applicant's determining the speed with which the objects move, and reconstructing based on the determined speed (paragraph 0018: a DVS sensor; paragraph 0063: the processing unit collects the information detected by a sensor, i.e., about the light that hit the sensor, and performs calculation or manipulation on it in order to extract further information, not directly captured by the sensor, but available after this elaboration. In one specific example, the elaboration involves estimating the speed of an object, or recognition of an object, based on information collected by the light hitting an image sensor, such that the speed with which the objects move is determined and subsequent processing is based on the determined speed). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the combination of Kubendran et al. in view of Wang et al. in a manner similar to Berner et al. Doing so would result in improving the combined invention in a similar way as the Berner et al. invention, namely by adding the ability of a DVS sensor to estimate the speed of an object from the information the sensor collects, as taught by Berner et al., to the imager readout method of Kubendran et al. that detects a positive (increase) or negative (decrease) change in pixel intensity, and to the further processing of the captured moving-object data of Wang et al., where the processed image data for display, as in Wang et al., includes the speed of an object estimated from the information collected by the DVS sensor, as in Berner et al.
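One hypothetical way to estimate object speed from an event stream, in the spirit of Berner's paragraph 0063 but not its disclosed method, is to track the displacement of the event centroid between consecutive time windows:

import numpy as np

def estimate_speed(window_a, window_b, dt, meters_per_pixel=1.0):
    """Estimate object speed as the displacement of the event centroid
    between two consecutive time windows separated by dt seconds;
    window_a and window_b are lists of (x, y) pixel coordinates that fired."""
    c_a = np.mean(np.asarray(window_a, dtype=np.float64), axis=0)
    c_b = np.mean(np.asarray(window_b, dtype=np.float64), axis=0)
    displacement = float(np.linalg.norm(c_b - c_a)) * meters_per_pixel
    return displacement / dt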
Regarding claim 7, the combination of Kubendran et al. in view of Wang et al., further in view of Berner et al., teaches applicant's limitation wherein the control unit is configured to determine each of the speed with which the objects move and the relative amount of change of the received intensity based on a machine learning algorithm (Kubendran et al., paragraph 0007: the image sensor, along with a controller in a feedback loop, can demonstrate activity-based event streaming to monitor pixel activity and modulate clock frequency accordingly to reduce data rate and power consumption even further. Wang et al., paragraph 0068: processor 167 is configured to execute the program code and is operative to receive and process pixel event signals from the DVS 50 and detect the motion in a scene being sensed by the DVS 50. Paragraph 0055: in machine learning, support vector machines are supervised learning models which, along with associated learning algorithms, are used to analyze data and recognize patterns using linear classification. Berner et al., paragraph 0063: the processing unit collects the information detected by a sensor, i.e., about the light that hit the sensor, and performs calculation or manipulation on it in order to extract further information not directly captured by the sensor, for example estimating the speed of an object or recognizing an object based on information collected by the light hitting an image sensor, such that the control unit is configured to determine each of the speed with which the objects move and the relative amount of positive or negative change of the received intensity based on a machine learning algorithm).
Allowable Subject Matter
Claims 2–4 and 9 are objected to as being dependent upon or ultimately dependent on a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK T MONK, whose telephone number is (571) 270-7454. The examiner can normally be reached Monday through Friday, 8 am to 4 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sinh Tran can be reached at 571-272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARK T MONK/Primary Examiner, Art Unit 2637