Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 and 6-10 are rejected under 35 U.S.C. 103 as being unpatentable over Cheol (Korea Patent Pub. No.: KR 10-1771146 B1), hereinafter Cheol, in view of Tomoko (Japan Patent Pub. No.: JP2000266539A), hereinafter Tomoko, further in view of Masayuki (PCT Patent Pub. No.: WO2016129403A1), hereinafter Masayuki.
Regarding claim 1, Cheol teaches an image processing apparatus comprising: a distance histogram generator configured to generate (The object candidate detection unit 110 may include a disparity image generation unit 111 and a histogram analysis unit 112 to detect object candidates. Page 3 6th paragraph), for each horizontal angle of view (In addition, the object candidate detection step may detect an object candidate by analyzing the uniformity of the vertical distribution of the histogram of the parallax image. Page 3 2nd paragraph) of an imaging unit (First, the stereo camera 200 may include a first camera 210 and a second camera 220 to generate a stereo image. Page 3 4th paragraph) mounted on a vehicle (More particularly, the present invention relates to a method and an apparatus for detecting a pedestrian, a vehicle, and the like using an image acquired through a stereo camera installed on a moving object such as a vehicle. Page 2 4th paragraph), a distance histogram as one-dimensional distance data (Fig. 3 illustrates a depth image and a histogram distribution diagram used in the object candidate detection step according to an exemplary embodiment of the present invention. The horizontal axes of the histograms 310 and 320 in FIG. 3 denote the depth values of the pixels of the depth image, and the vertical axes denote the number of pixels of the depth value. Page 4 1st paragraph.
[media_image1.png]
), based on distance data of a distance image captured by the imaging unit (Fig. 3 depth map), the distance image having a pixel value corresponding to a distance to a crossing object in captured images of a region in front of the vehicle (The parallax image generating unit 111 may convert the parallax image generated using camera parameters or the like into a depth image. For example, the depth image may be an image in which the distance from the camera to the object is expressed as a value from 0 to 255. Page 3 7th paragraph).
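For orientation, the per-angle distance histogram that Cheol is cited for can be sketched as follows. This is an illustrative reading only: the function name, the column-per-angle mapping, and the 0-255 depth binning are assumptions drawn from the quoted passages, not from the claim or the reference itself.

```python
import numpy as np

def distance_histograms(depth_image, num_bins=256):
    """One 1-D distance histogram per image column.

    Each column is treated as one horizontal angle of view of the
    imaging unit; depth values are assumed to lie in 0-255, as in
    the quoted depth-image convention (an assumption, not claim
    language).
    """
    height, width = depth_image.shape
    hists = np.zeros((width, num_bins), dtype=np.int64)
    for col in range(width):
        counts, _ = np.histogram(depth_image[:, col],
                                 bins=num_bins, range=(0, 256))
        hists[col] = counts
    return hists
```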
Cheol does not teach the following limitations as further recited, but Tomoko further teaches a computation processor configured to compute edge histogram data (Fig. 8(C) is a histogram of each horizontal edge in the windows (1) to (5) in (B).
[media_image2.png]
) for each distance histogram generated by the distance histogram generator (The horizontal positions defining these vertically long windows may be within the range xl to xr in which the preceding vehicle detected from the distance image in FIG. 6 is captured. [0039].
[media_image3.png]
); a memory (Reference numeral 9 denotes a calculation unit, which is composed of a microcomputer including, for example, a CPU, RAM, ROM, and the like. [0077]) configured to hold at least the distance histogram generated by the distance histogram generator and the edge histogram data computed by the computation processor in association with each other (Next, an embodiment will be described in which the method described above is used to detect edges on a preceding vehicle being followed and to determine the rate of change of the inter-vehicle distance, thereby confirming the inter-vehicle distance and improving its accuracy. [0059]. In other words, the distance histogram and the edge histogram data are held in association with each other.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cheol to incorporate the teachings of Tomoko to compute edge histogram data for each distance histogram generated by the distance histogram generator, and to hold the distance histogram generated by the distance histogram generator and the edge histogram data computed by the computation processor in association with each other, in order to improve the reliability of measuring the distance, and changes in the distance, to a vehicle ahead that is being followed.
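A minimal sketch of the edge-histogram step attributed to Tomoko, with the vertical windows given as (xl, xr) column ranges as in the quoted passage. The gradient operator and the per-window summation are illustrative assumptions, not the reference's exact computation.

```python
import numpy as np

def edge_histograms(gray, windows):
    """Horizontal-edge histogram for each vertical window.

    `gray` is a 2-D luminance image; `windows` is a list of
    (xl, xr) column ranges such as the xl-to-xr span quoted from
    Tomoko. The vertical intensity difference is strong on
    horizontal edges; summing it across each window's columns
    gives a 1-D histogram of edge strength per image row.
    """
    grad = np.abs(np.diff(gray.astype(np.float64), axis=0))
    return [grad[:, xl:xr].sum(axis=1) for xl, xr in windows]
```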
The combination of Cheol and Tomoko does not teach the following limitations as further recited, but Masayuki further teaches an extractor configured to extract (The pedestrian detection unit 300 extracts three-dimensional objects using this parallax image, tracks the extracted three-dimensional object candidates in chronological order, and, when three-dimensional object candidates are stably extracted in chronological order, identifies whether the parallax shape and the outline shape based on the edges extracted from the current image are likely to be those of a pedestrian. [0014]), from the edge histogram data computed in a past by the computation processor (This will be explained with reference to the upper diagram in FIG. If the position of the pedestrian can be acquired in time series from two frames before, it is assumed that the position of the pedestrian from T-2 [frame] to the current T [frame] can be acquired. [0022]. Tomoko teaches an edge histogram can be generated from distance image data (see rejection for claim 1 above).), any edge histogram data (The pedestrian detection unit 300 extracts three-dimensional objects using this parallax image, tracks the extracted three-dimensional object candidates in chronological order, and, when three-dimensional object candidates are stably extracted in chronological order, identifies whether the parallax shape and the outline shape based on the edges extracted from the current image are likely to be those of a pedestrian. 
[0014]) that matches with the edge histogram data computed latest by the computation processor (Taking into account the behavior of the vehicle, tracking of three-dimensional objects is performed by comparing the position and size of the three-dimensional object on the image predicted from the previous frame to the current frame, whether the predicted three-dimensional object is in a similar position, size, and disparity value within a certain threshold, and whether the position on the image of the previous frame is similar to the position on the image of the current frame. [0016]); a movement amount calculator configured to calculate an amount of movement of the crossing object (In step S10, the movement information prediction unit 530 predicts the pedestrian's destination using the pedestrian's position information and position accuracy information. [0077]) in a direction of width of the vehicle (
[media_image4.png]
), based on a difference value between the extracted edge histogram data computed in the past by the computation processor and the edge histogram data computed latest by the computation processor (For this reason, movement prediction is performed using the instantaneous value of the position accuracy information acquired by the position accuracy information generating unit 400. As shown in the movement prediction (a) using the position accuracy information at the bottom right of FIG. 17, positions with poor accuracy using the position accuracy information are treated as excluded from the data for movement prediction. [0024].
); and an image processing control processor configured to execute image processing control (In step S02, the parallax image generating unit 200 performs stereo matching processing using the images captured by the imaging unit 100 of the stereo camera, and generates parallax images. [0070]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Cheol and Tomoko to incorporate the teachings of Masayuki to extract, from the edge histogram data computed in a past, any edge histogram data that matches with the edge histogram data computed latest by the computation processor, and to calculate an amount of movement of the crossing object in a direction of width of the vehicle based on a difference value between the extracted edge histogram data computed in the past and the edge histogram data computed latest by the computation processor, in order to recognize the environment around the vehicle and prevent accidents before they happen.
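One way to read the matching-and-movement step attributed to Masayuki is as a 1-D template match between the latest edge histogram and a past one, taking the best-aligning shift as the lateral movement. The sum-of-absolute-differences cost and the shift search are assumptions for illustration; the reference itself describes tracking by position, size, and disparity similarity rather than this exact computation.

```python
import numpy as np

def lateral_shift(past_hist, latest_hist, max_shift=6):
    """Shift (in histogram bins) that best aligns a past edge
    histogram with the latest one, found by minimizing the mean
    absolute difference over the overlapping region. The returned
    shift stands in for the amount of movement in the width
    direction of the vehicle.
    """
    n = len(latest_hist)
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), n + min(0, s)
        if hi <= lo:
            continue  # no overlap at this shift
        cost = np.abs(latest_hist[lo:hi] - past_hist[lo - s:hi - s]).mean()
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift
```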
Regarding claim 2, Tomoko in the combination teaches the image processing apparatus according to claim 1, wherein the computation processor is configured to compute the edge histogram data (Fig. 8(C) is a histogram of each horizontal edge in the windows (1) to (5) in (B).
[media_image2.png]
) in a rectangular computation region (Fig. 8(B) windows (1) to (5)) that is set by the image processing control processor as a region in which the edge histogram data is to be computed (Fig. 8(C) is a histogram of each horizontal edge in the windows (1) to (5) in (B)), the rectangular computation region corresponding to a size (The horizontal positions defining these vertically long windows may be within the range xl to xr in which the preceding vehicle detected from the distance image in FIG. 6 is captured. [0039]) on a screen displaying the distance image (The horizontal positions defining these vertically long windows may be within the range xl to xr in which the preceding vehicle detected from the distance image in FIG. 6 is captured. [0039].
[media_image5.png]
).
Regarding claim 6, Masayuki in the combination teaches the image processing apparatus according to claim 1, wherein the extractor is configured to define, as one or more matching targets, one or more pieces of the distance histogram data computed in the past, included in a search range set by the image processing control processor (This will be explained with reference to the upper diagram in FIG. If the position of the pedestrian can be acquired in time series from two frames before, it is assumed that the position of the pedestrian from T-2 [frame] to the current T [frame] can be acquired. [0022]. Cheol teaches a distance histogram can be generated from distance image data (see rejection for claim 1).), and each having a height difference within a first threshold from the distance histogram generated by the distance histogram generator (Taking into account the behavior of the vehicle, tracking of three-dimensional objects is performed by comparing the position and size of the three-dimensional object on the image predicted from the previous frame to the current frame, whether the predicted three-dimensional object is in a similar position, size (i.e., height), and disparity value (i.e., distance) within a certain threshold, and whether the position on the image of the previous frame is similar to the position on the image of the current frame. [0016]).
Regarding claim 7, Masayuki in the combination teaches the image processing apparatus according to claim 6, wherein the extractor is configured to define, as the one or more matching targets, one or more pieces of the edge histogram data computed in the past (The pedestrian detection unit 300 extracts three-dimensional objects using this parallax image, tracks the extracted three-dimensional object candidates in chronological order, and, when three-dimensional object candidates are stably extracted in chronological order, identifies whether the parallax shape and the outline shape based on the edges extracted from the current image are likely to be those of a pedestrian. [0014]. Tomoko teaches an edge histogram can be generated from distance image data (see rejection for claim 1).), included in the search range set by the image processing control processor (This will be explained with reference to the upper diagram in FIG. If the position of the pedestrian can be acquired in time series from two frames before, it is assumed that the position of the pedestrian from T-2 [frame] to the current T [frame] can be acquired. [0022]), and each exhibiting a change in speed within a second threshold (Taking into account the behavior of the vehicle, tracking of three-dimensional objects is performed by comparing the position and size of the three-dimensional object on the image predicted from the previous frame to the current frame, whether the predicted three-dimensional object is in a similar position, size, and disparity value within a certain threshold, and whether the position on the image of the previous frame is similar to the position on the image of the current frame. [0016]. 
In step S04, the tracking unit 320 uses at least two pieces of information from the three-dimensional object position in the current frame, the three-dimensional object candidate position in the previous frame, the three-dimensional object speed information, the vehicle behavior, etc., to track the three-dimensional object candidate extracted independently for each frame in the processing of step S03. [0072]. A person having ordinary skill in the art would recognize a second threshold can be established to set a search range for speed.) relative to the distance histogram generated by the distance histogram generator (The effective parallax histogram (i.e., the distance histogram) thus projected in the vertical direction of the image is used to calculate the degree of separation from surrounding objects. [0047]).
Regarding claim 8, Masayuki in the combination teaches the image processing apparatus according to claim 6, wherein the extractor is configured to determine a degree of closeness (Taking into account the behavior of the vehicle, tracking of three-dimensional objects is performed by comparing the position and size of the three-dimensional object on the image predicted from the previous frame to the current frame, whether the predicted three-dimensional object is in a similar position, size, and disparity value within a certain threshold, and whether the position on the image of the previous frame is similar to the position on the image of the current frame. [0016]) between the edge histogram data computed latest by the computation processor and each of the one or more pieces of the edge histogram data defined as the one or more matching targets (This will be explained with reference to the upper diagram in FIG. If the position of the pedestrian can be acquired in time series from two frames before, it is assumed that the position of the pedestrian from T-2 [frame] to the current T [frame] can be acquired. [0022]. Tomoko teaches an edge histogram can be generated from distance image data (see rejection for claim 1).), and to extract any edge histogram data to be matched with the edge histogram data computed latest (In this way, by determining whether or not to use the data for predicting movement depending on the position accuracy information, data with poor position accuracy that is likely to have a large error (i.e., a degree of closeness) is excluded, and position data with a large error is excluded as shown in (a), and actual movement and prediction can be obtained with significantly less error. [0024]).
Regarding claim 9, Masayuki in the combination teaches the image processing apparatus according to claim 7, wherein the extractor is configured to determine a degree of closeness (Taking into account the behavior of the vehicle, tracking of three-dimensional objects is performed by comparing the position and size of the three-dimensional object on the image predicted from the previous frame to the current frame, whether the predicted three-dimensional object is in a similar position, size, and disparity value within a certain threshold (i.e., a degree of closeness), and whether the position on the image of the previous frame is similar to the position on the image of the current frame. [0016]) between the edge histogram data computed latest by the computation processor and each of the one or more pieces of the edge histogram data defined as the one or more matching targets (This will be explained with reference to the upper diagram in FIG. If the position of the pedestrian can be acquired in time series from two frames before, it is assumed that the position of the pedestrian from T-2 [frame] to the current T [frame] can be acquired. [0022]. Tomoko teaches an edge histogram can be generated from distance image data (see rejection for claim 1).), and to extract any edge histogram data to be matched with the edge histogram data computed latest (In this way, by determining whether or not to use the data for predicting movement depending on the position accuracy information, data with poor position accuracy that is likely to have a large error is excluded, and position data with a large error is excluded as shown in (a), and actual movement and prediction can be obtained with significantly less error. [0024]).
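The threshold-then-closeness flow recited across claims 6-9 can be sketched as below. The entry layout (hypothetical keys 'height', 'speed', 'edge_hist') and the mean-absolute-difference closeness measure are assumptions for illustration; Masayuki is cited only for thresholded tracking in general.

```python
import numpy as np

def extract_match(latest, past_entries, height_thresh, speed_thresh):
    """Define matching targets using the first (height) and second
    (speed) thresholds, then return the past entry whose edge
    histogram is closest to the latest one. Returns None when no
    past entry qualifies.
    """
    targets = [e for e in past_entries
               if abs(e["height"] - latest["height"]) <= height_thresh
               and abs(e["speed"] - latest["speed"]) <= speed_thresh]
    if not targets:
        return None
    # Degree of closeness: smaller mean absolute difference is closer.
    return min(targets,
               key=lambda e: np.abs(e["edge_hist"] - latest["edge_hist"]).mean())
```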
Apparatus claim 10 is drawn to the apparatus as claimed in claim 1. Therefore apparatus claim 10 corresponds to apparatus claim 1, and is rejected for the same reasons of obviousness as used above.
Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Cheol (Korea Patent Pub. No.: KR 10-1771146 B1), hereinafter Cheol, in view of Tomoko (Japan Patent Pub. No.: JP2000266539A), hereinafter Tomoko, further in view of Masayuki (PCT Patent Pub. No.: WO2016129403A1), hereinafter Masayuki, further in view of Kazutoshi (Japan Patent No.: JP3650205B2), hereinafter Kazutoshi.
Regarding claim 3, Tomoko teaches the image processing apparatus according to claim 2, wherein the rectangular computation region comprises a plurality of the rectangular computation regions (Fig. 8(B) windows (1) to (5)), and the computation processor is configured to compute the edge histogram data (Fig. 8(C) is a histogram of each horizontal edge in the windows (1) to (5) in (B).
[media_image2.png]
) while causing the rectangular computation regions (In addition, in the explanation of Figure 8, a case has been described in which multiple windows for edge detection set on the preceding vehicle are vertically long, but the same principle can be applied even if horizontally long windows are used and the target edges on the preceding vehicle to be detected are vertical edges. [0058]) to move from a road surface on the screen to a predetermined height (When the distance Z to the preceding vehicle is known, and the height from the road surface to the camera is H and the height of the preceding vehicle is h, as shown in FIG. 7, the upper and lower ends (yu, yd) of the preceding vehicle are imaged at approximately the positions given by the following equation (2): Here, the lower end yd is considered to be at approximately the same height as the road surface. [0037]).
The combination of Cheol, Tomoko, and Masayuki does not teach the following limitations as further recited, but Kazutoshi further teaches while causing the rectangular computation regions to sequentially overlap each other (H is the height of the window, m is the point P1 that is only separated by the predetermined overlap rate (%) and the point P2 that is only H away from this point P1 in the direction of movement of window 43. [0021]. When the next window 43′ is determined, window 43 will be moved sequentially while sampling the edge points in the same window 43′ as mentioned above and fitting the approximate straight line. [0022].
[media_image6.png]
).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cheol, Tomoko and Masayuki to incorporate the teachings of Kazutoshi to cause the rectangular computation regions to sequentially overlap each other in order to flexibly perform edge tracking and extract the necessary measurement information.
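The sequentially overlapping windows attributed to Kazutoshi can be sketched as a simple start-position generator. The overlap-rate parameterization follows the quoted passage, while the function name and the clamping behavior are illustrative assumptions.

```python
def overlapping_window_starts(extent, window_size, overlap_rate):
    """Start positions of windows of `window_size` that each overlap
    the previous one by `overlap_rate` (0 <= rate < 1), spanning
    `extent` pixels, e.g. from the road surface upward.
    """
    step = max(1, int(round(window_size * (1.0 - overlap_rate))))
    starts = []
    pos = 0
    while pos + window_size <= extent:
        starts.append(pos)
        pos += step
    return starts
```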
Regarding claim 4, Masayuki in the combination teaches the image processing apparatus according to claim 3, wherein the computation processor is configured to refrain from executing any computation process in any of the rectangular computation regions that includes no height data in which the distance data is present in the distance histogram (The three-dimensional object extraction unit 310 extracts a group of parallaxes with similar parallax from the parallax image (i.e., the distance data is available) in the form of a rectangular frame, thereby extracting a division as a three-dimensional object that is thought to be one mass (i.e., the height data is available). [0015]. A person having ordinary skill in the art would recognize that height data is needed to determine an object’s position and the system should refrain from any computation if the rectangular computation regions include no height data.).
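Claim 4's refrain-from-computing behavior, under the examiner's reading above, can be sketched as a guard placed before the window computation. Treating all-zero depth pixels as "no distance/height data" is an illustrative convention, not something the references specify.

```python
import numpy as np

def histogram_if_data(window_depths):
    """Return a depth histogram for the window, or None when the
    window contains no valid distance data (here: all zeros), so
    that no computation process is executed for empty regions.
    """
    if not np.any(window_depths):
        return None  # no height/distance data present: skip
    counts, _ = np.histogram(window_depths, bins=256, range=(0, 256))
    return counts
```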
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Cheol (Korea Patent Pub. No.: KR 10-1771146 B1), hereinafter Cheol, in view of Tomoko (Japan Patent Pub. No.: JP2000266539A), hereinafter Tomoko, further in view of Masayuki (PCT Patent Pub. No.: WO2016129403A1), hereinafter Masayuki, further in view of Kazutoshi (Japan Patent No.: JP3650205B2), hereinafter Kazutoshi, further in view of Masumi (Japan Patent Pub. No.: JP2014-182629A), hereinafter Masumi.
Regarding claim 5, Cheol, Tomoko, Masayuki and Kazutoshi teach all of the elements of the claimed invention as stated in claim 4 except for the following limitations as further recited. However, Masumi teaches wherein the computation processor is configured to compute a luminance gradient as the edge histogram data (The image feature amount is, for example, a global feature such as a color histogram or an edge histogram obtained from an image including an object, or a local feature amount such as SIFT (Scale Invariant Feature Transform) based on a brightness gradient at a peripheral position of an image such as a corner point. [0015]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cheol, Tomoko, Masayuki and Kazutoshi to incorporate the teachings of Masumi to compute a luminance gradient as the edge histogram data in order to efficiently perform object detection.
Tomoko in the combination further teaches to determine an edge distribution (Fig. 8(C) is a histogram of each horizontal edge in the windows (1) to (5) in (B)
[media_image2.png]
) for each distance histogram (The horizontal positions defining these vertically long windows may be within the range xl to xr in which the preceding vehicle detected from the distance image in FIG. 6 is captured. [0039].
[media_image3.png]
).
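The luminance-gradient reading of the edge histogram data, attributed to Masumi, can be sketched as a histogram of gradient magnitudes (a simplified, HOG-like construction; the bin count and the finite-difference operator are assumptions for illustration).

```python
import numpy as np

def luminance_gradient_histogram(gray, num_bins=16, max_mag=362.0):
    """Histogram of luminance-gradient magnitudes over the image.

    Gradients are simple forward differences; max_mag defaults to
    roughly the largest magnitude possible for 8-bit input
    (255 * sqrt(2)).
    """
    g = gray.astype(np.float64)
    gx = np.diff(g, axis=1)[:-1, :]   # horizontal brightness change
    gy = np.diff(g, axis=0)[:, :-1]   # vertical brightness change
    mag = np.hypot(gx, gy)
    counts, _ = np.histogram(mag, bins=num_bins, range=(0.0, max_mag))
    return counts
```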
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEI ZHAO whose telephone number is (703)756-1922. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VU LE can be reached at (571)272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LEI ZHAO/Examiner, Art Unit 2668
/VU LE/Supervisory Patent Examiner, Art Unit 2668