Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites “post-processing the output value from the neural network to identify a feature of the environment of a vehicle.” However, it is unclear whether “a vehicle” in this limitation refers to the same vehicle as “a vehicle” recited in the preamble, for which all of the processing recited in the preceding limitations is performed. Claim 1 is indefinite because this lack of clarity prevents the metes and bounds of the claim from being ascertained.
Claim 17 recites a similar limitation and has the same issue. All other claims depend from claim 1 or claim 17; therefore, claims 2-25 are rejected for the same reasons as claim 1.
Dependent claims 14 and 25 also recite “a vehicle” and have the same clarity issue discussed above for claim 1.
Dependent claims 5, 15-16, and 25 recite “the vehicle”. There is insufficient antecedent basis for “the vehicle” in these claims.
Claims 6-8 depend from claim 5 and are therefore rejected for the same reasons as claim 5.
Claim 2 recites “wherein the feature is a road or driving surface, such that the method is for road segmentation”. This limitation is unclear because it is inconsistent with the limitations of its parent claim, which recites “outputting an output value indicative of a classification result from the neural network; … post-processing the output value from the neural network to identify a feature of the environment of a vehicle.” In particular, it is unclear what is meant by “such that the method is for road segmentation”.
Claim 3 recites “wherein the neural network has a continuous time recurrent neural network architecture and in particular is a low-resolution recurrent active vision neural network.” This limitation is unclear because it is unclear what “a low-resolution recurrent active vision neural network” is. Furthermore, the terms “low” in “low-resolution” and “active” in “active vision” are relative terms that render the claim indefinite. These terms are not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claims 4 and 8 recite “the patches”. There is insufficient antecedent basis for “the patches”.
Claims 5-8 depend from claim 4 and are therefore rejected for the same reasons as claim 4.
Claim 5 recites “the patch” and “the plurality of patches”. There is insufficient antecedent basis for these limitations. Claims 6-8 depend from claim 5 and are therefore rejected for the same reasons as claim 5.
Claim 8 recites “the sub-patch”. There is insufficient antecedent basis for “the sub-patch”.
Claim 10 recites “wherein the colour transformation is a transformation into hue, saturation and green/magenta colour channels.” This limitation is unclear because “hue, saturation and green/magenta colour channels” are not the channels of any commonly used colour space. In light of the disclosure, the colour transformation is, in one example, a transformation from an RGB colour space. It is thus unclear what is meant by transforming from an RGB colour space, a standard colour space with three channels, into a colour space with “hue, saturation and green/magenta colour channels”. To advance prosecution, for the remainder of this Office action, the “hue, saturation and green/magenta colour channels” will be interpreted as the standard hue, saturation and lightness (HSL) colour channels or hue, saturation and value (HSV) colour channels.
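For illustration only, the interpretation adopted above amounts to a standard RGB-to-HSV transformation. A minimal sketch in Python follows, using the standard-library colorsys module; the function name and the sample pixel are hypothetical and are not drawn from the claims or the disclosure.

```python
# Minimal illustrative sketch of the adopted interpretation: an RGB -> HSV
# transformation using Python's standard colorsys module.
import colorsys

def rgb_to_hsv_channels(r, g, b):
    """Map 8-bit RGB values to (hue, saturation, value), each in [0, 1]."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

# Example: a pure-green pixel maps to hue 1/3, full saturation, full value.
print(rgb_to_hsv_channels(0, 255, 0))  # (0.333..., 1.0, 1.0)
```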
Claim 12 recites “wherein the first n iterations are discounted from the calculation of the average output value”. This limitation is unclear because it is unclear what is meant by “discounted”: does it mean not counted at all, or counted with lesser weight? If it means counted with lesser weight, implying a comparison, it is also unclear what the weight is compared to.
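To make the ambiguity concrete, the following minimal Python sketch contrasts the two possible readings; the per-iteration output values, the value of n, and the reduced weight are hypothetical assumptions for illustration only.

```python
# Hypothetical per-iteration output values from the neural network and a
# hypothetical n; both are assumptions for illustration only.
outputs = [0.2, 0.5, 0.9, 0.95, 0.94]
n = 2

# Reading 1 ("not counted"): the first n iterations are excluded entirely.
avg_excluded = sum(outputs[n:]) / len(outputs[n:])

# Reading 2 ("counted less"): the first n iterations receive a reduced
# weight, which presupposes some comparison weight (assumed 0.5 vs. 1.0).
weights = [0.5] * n + [1.0] * (len(outputs) - n)
avg_weighted = sum(w * x for w, x in zip(weights, outputs)) / sum(weights)

print(avg_excluded, avg_weighted)  # 0.93 vs. 0.785: different results
```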
Claim 17 recites “instructions stored thereon, which, when executed by a processor, cause the processor to act as a perception software stack” and “a third output for outputting an output value indicative of a classification result from the neural network; and a fourth layer configured to obtain and post-process an output value from the neural network to identify a feature of the environment of a vehicle.” It is unclear what is meant by causing “the processor to act as a perception software stack”: under the broadest reasonable interpretation (BRI), a processor is hardware, whereas a software stack is a collection of software components, which is still software. It is also unclear what the difference is between the two recited output values (the output value output by the third output and the output value obtained by the fourth layer), and thus it is unclear which output value is post-processed.
Claims 18-25 depend from claim 17 and are therefore rejected for the same reasons as claim 17.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 17-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 17 recites a “computer-readable medium”. However, the specification as originally filed does not explicitly define the term “computer-readable medium”.
The United States Patent and Trademark Office (USPTO) is obliged to give claims their broadest reasonable interpretation consistent with the specification during proceedings before the USPTO. See In re Zletz, 893 F.2d 319 (Fed. Cir. 1989) (during patent examination the pending claims must be interpreted as broadly as their terms reasonably allow). The broadest reasonable interpretation of a claim drawn to a computer-readable medium typically covers both non-transitory tangible media and transitory propagating signals per se, in view of the ordinary and customary meaning of computer-readable media, particularly when the specification lacks an explicit definition or is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. § 101, Aug. 24, 2009, p. 2.
Claims 18-25 depend from claim 17 without adding any further limitation that would render the claimed invention statutory, and are therefore rejected for the same reasons as claim 17.
Claims 18-25 also recite a “perception software stack”, which is software per se and does not fall within a statutory category.
To render the claimed inventions statutory, the examiner suggests replacing “computer-readable medium” with “non-transitory computer-readable medium” in claim 17, and replacing “perception software stack” with “non-transitory computer-readable medium” in claims 18-25.
References Cited in Prior Art Rejections
The following references are cited in the prior art rejections set forth below and are referred to as noted:
Hotson et al., US 20180211128 A1, published on 2018-07-26, hereinafter Hotson, and
Toyama, US 20050008193 A1, published on 2005-01-13, hereinafter Toyama.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-25 are rejected under 35 U.S.C. 103 as being unpatentable over Hotson in view of Toyama.
Regarding claim 1, Hotson discloses a computer-implemented method for use in a vehicle for identifying a feature of the environment of the vehicle, (Hotson: Figs. 1-3, 5 and 7, [0016]) the method comprising:
receiving an original image from a sensor or camera; (Hotson: [0018-0020, 0029]. For example, a point cloud from radar or LIDAR.)
pre-processing the original image to produce an input image; (Hotson: [0017-0020]. For example, processing point cloud data to obtain a depth map and registering the depth map with an RGB camera image.)
presenting the input image to a neural network; (Hotson: Fig. 2, [0017-0021, 0031])
wherein the neural network is trained to classify a feature in an image presented to it, the neural network having an input layer, a hidden layer and an output layer, the output layer including three outputs: (Hotson: 202-210 in Fig. 2, [0031].)
a first feedback output for selecting pixels from the input image to input at the input layer at each iteration of the neural network; (Hotson: Fig. 2, [0031-0032, 0035, 0037, 0040], Input nodes 202 represent input information for each pixel or pixels in an image [0031] and receive feedback for each pixel or the pixels from output nodes 210 [0035] based on processing results of previous images, thus effectively selecting pixels from the current input image, as illustrated in Figs. 4 and 6 for processing stages along a time line.)
a third output for outputting an output value indicative of a classification result from the neural network; (Hotson: Fig. 2, [0031-0032, 0035, 0040]. “[0031] … At the end of the computation, the output nodes 210 yield values that correspond to the class inferred by the neural network.”)
obtaining the output value from the neural network; (Hotson: Fig. 2, [0031-0032, 0035, 0040, 0044], i.e., values corresponding to classes and location for each sub-region.) and
post-processing the output value from the neural network to identify a feature of the environment of a vehicle. (Hotson: Fig. 2, [0031-0032, 0035, 0040, 0044], a vehicle, a bicycle, a pedestrian, a curb or barrier, etc., is identified as a result.)
Hotson does not disclose explicitly a second feedback output for selecting a colour channel of the selected pixels to input at the input layer at each iteration.
But Toyama teaches, in an analogous art of video image processing involving object identification and tracking, a second feedback output for selecting a colour channel of the selected pixels to input at the input layer at each iteration. (Toyama: [0015, 0017, 0022, 0028-0031, 0056, 0067-0068, 0089, 0100, 0108], color based object tracking for image pixels through iterative feedback learning process.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hotson’s disclosure with Toyama’s teachings by combining the method for use in a vehicle for identifying a feature of the environment of the vehicle (from Hotson) with the technique of selecting a colour channel of pixels through an iterative feedback learning process (from Toyama). Doing so would yield no more than the predictable use of prior-art elements according to their established functions, since all of the claimed elements, which are taught by the prior-art references, would continue to operate in the same manner: the method for use in a vehicle for identifying a feature of the environment of the vehicle would still work as taught by Hotson, and the technique of selecting a colour channel of pixels through an iterative feedback learning process would continue to function as taught by Toyama. In fact, the inclusion of Toyama’s technique would provide a practical and/or alternative implementation of Hotson’s method and would thus enable a better and more flexible method for use in a vehicle for identifying a feature of the environment of the vehicle.
Therefore, it would have been obvious to combine Hotson with Toyama to obtain the invention as specified in claim 1.
Regarding claim 2, Hotson {modified by Toyama} discloses the method of claim 1 wherein the feature is an object, such that the method is for object detection, or wherein the feature is a road or driving surface, such that the method is for road segmentation; or wherein the feature is present in the input image, such that the method is for image classification. (Hotson: [0024, 0032, 0035, 0040, 0044], a vehicle, a bicycle, a pedestrian, and/or a curb or barrier is detected.)
Regarding claim 3, Hotson {modified by Toyama} discloses the method of claim 1 wherein the neural network has a continuous time recurrent neural network architecture and in particular is a low-resolution recurrent active vision neural network. (Hotson: Figs. 4-5, [0025, 0028-0029, 0031, 0043-0049, 0051]. “Note that, by the Shannon sampling theorem, discrete time recurrent neural networks can be viewed as continuous-time recurrent neural networks”, according to Wikipedia (see page 6 of the Wikipedia reference on RNNs attached to this Office action). Furthermore, since Hotson’s disclosure is directed to real-time object detection using real-time sensor data for such applications as an autonomous vehicle, its RNN is interpreted as a recurrent active vision neural network. Also, since it uses sensor data from such sensors as LIDAR, radar, or cameras including infrared depth cameras, its RNN is interpreted as a low-resolution recurrent active vision neural network.)
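For reference, a minimal statement of the standard continuous-time RNN model referenced by the cited Wikipedia article follows; the symbols are conventional and are not drawn from Hotson or the claims:

\[
\tau_i \,\frac{dy_i(t)}{dt} = -y_i(t) + \sum_j w_{ji}\,\sigma\bigl(y_j(t) - \Theta_j\bigr) + I_i(t),
\]

where \(y_i\) is the activation of neuron \(i\), \(\tau_i\) its time constant, \(w_{ji}\) the weight of the connection from neuron \(j\), \(\sigma\) a sigmoid nonlinearity, \(\Theta_j\) a bias, and \(I_i(t)\) an external input. Sampling such dynamics at discrete time steps yields a discrete-time RNN, which is the sense of the Shannon-sampling remark quoted above.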
Regarding claim 4, Hotson {modified by Toyama} discloses the method of claim 1 wherein the pre-processing includes splitting the original image into a plurality of smaller-sized patches, and presenting the input image to the neural network includes consecutively presenting the patches to the neural network. (Hotson: “[0038] … Thus, the image may be processed one sub-region at a time. For example, the window 302 represents a portion of the image 302 that may be fed to a neural network for object or feature detection. The window 302 may be slid to different locations to effectively process the whole image 302. For example, the window 302 may start in a corner and then be subsequently moved from point to point to detect features.”)
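As an illustration of the sliding-window reading above, the following minimal Python sketch splits an image into smaller patches to be presented consecutively; the function, patch size, and use of NumPy are hypothetical assumptions for illustration only.

```python
import numpy as np

def split_into_patches(image, ph, pw):
    """Split a 2-D image into non-overlapping ph x pw patches, returned in
    row-major order so they can be presented to the network one at a time."""
    h, w = image.shape
    return [image[y:y + ph, x:x + pw]
            for y in range(0, h, ph)
            for x in range(0, w, pw)]

# Example: a 4x6 image yields six 2x2 patches, presented consecutively.
patches = split_into_patches(np.arange(24).reshape(4, 6), 2, 2)
print(len(patches))  # 6
```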
Regarding claim 5, Hotson {modified by Toyama} discloses the method of claim 4, wherein obtaining the output value from the neural network comprises obtaining an output value for each patch; (Hotson: Figs. 2-4, [0038, 0043-0047])
wherein post-processing the output value comprises post-processing the output value of each patch to produce a heat-map image, (Hotson: Figs. 2-4, [0031-0032, 0038, 0043-0044]. “[0043] … For each sub-region 410 (such as a location of the window 302 of FIG. 3), an object prediction is generated.” “[0044] The object predictions may indicate an object type, and/or an object location. For example, a ‘0’ value for the object prediction may indicate that there is no object, a ‘1’ may indicate that the object is a car, a ‘2’ may indicate that the object is a pedestrian, and so forth.” The claimed “heat-map image” is interpreted as the image in Fig. 3 with each window 302 assigned a value (i.e., 0, 1, 2, etc.) indicating an object type (no object, car, pedestrian, etc.)) wherein the heat-map image is formed by:
generating a second plurality of patches, wherein each of the second plurality of patches is paired with an individual patch in the plurality of patches; (Hotson: Figs. 2-4, [0038, 0043-0044]. Each of the second plurality of patches is the same as its paired patch from the (first) plurality of patches.)
filling each of the second plurality of patches with a singular pixel value based on the output value for the patch to which it is paired; (Hotson: Figs. 2-4, [0031-0032, 0035, 0038, 0043-0044]. The discussion regarding Fig. 2 applies to each sub-region since, as discussed above, the neural network of Fig. 2 is used to process one sub-region at a time. The claimed “singular pixel value” is the output value of 0, 1, 2, etc., indicating an object type of no object, car, pedestrian, etc. [0044].) and
positioning each of the second plurality of patches in a heat-map image plane in the same relative position as the patch to which it is paired with respect to the image plane of the original image; (Hotson: Figs. 2-4, [0031-0032, 0035, 0038, 0043-0044]. This is implied since each sub-region (window 302 of Fig. 3) is processed one at a time by the neural network of Fig. 2, which produces for each sub-region an output value of 0, 1, 2, etc., indicating an object type of no object, car, pedestrian, etc. [0044]. A minimal illustrative sketch of this heat-map formation follows the claim 5 discussion below.)
and wherein post-processing further comprises applying a segmentation or fitting algorithm for feature identification. (Hotson: Figs. 3-5, [0038, 0049])
Hotson {modified by Toyama} does not disclose explicitly a segmentation or fitting algorithm for feature identification in an image. Such algorithms are, however, well known and commonly practiced in the image-processing art for object detection; the examiner takes official notice of this fact.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Hotson {modified by Toyama} with the officially noticed teachings by combining the method for use in a vehicle for identifying a feature of the environment of the vehicle (from Hotson {modified by Toyama}) with a segmentation or fitting algorithm for feature identification in an image (from the Official Notice). Doing so would yield no more than the predictable use of prior-art elements according to their established functions, since all of the claimed elements would continue to operate in the same manner: the method would still work as taught by Hotson {modified by Toyama}, and the segmentation or fitting algorithm would continue to function as is well known in the art. In fact, the inclusion of the officially noticed technique would provide a practical implementation of the method and would thus enable a better and more flexible method for use in a vehicle for identifying a feature of the environment of the vehicle.
Therefore, it would have been obvious to combine Hotson {modified by Toyama} with the Official Notice to obtain the invention as specified in claim 5.
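A minimal sketch, under the interpretation adopted in the claim 5 discussion above, of forming a heat-map image by filling each paired patch location with the single per-patch output value; the image size, patch size, and class values are hypothetical assumptions for illustration only.

```python
import numpy as np

def heatmap_from_patches(image_shape, patch_size, patch_values):
    """Fill each patch position in the heat-map plane, at the same relative
    position as its paired patch in the original image plane, with that
    patch's single output value (e.g., 0 = no object, 1 = car,
    2 = pedestrian)."""
    h, w = image_shape
    ph, pw = patch_size
    heatmap = np.zeros((h, w), dtype=np.uint8)
    values = iter(patch_values)
    for y in range(0, h, ph):
        for x in range(0, w, pw):
            heatmap[y:y + ph, x:x + pw] = next(values)
    return heatmap

# Example: a 4x6 image split into 2x2 patches, one output value per patch.
print(heatmap_from_patches((4, 6), (2, 2), [0, 1, 2, 0, 1, 0]))
```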
Regarding claim 6, Hotson {modified by Toyama} discloses the method of claim 5 wherein each of the second plurality of patches is reduced to one pixel or a singular array entry before forming the heat-map image, such that the resolution of the heat-map image is less than the resolution of the original image. (Hotson: Figs. 2-4, [0031-0032, 0038, 0043-0044]. Each sub-region is effectively reduced to “one pixel” in the sense that “[0043] … For each sub-region 410 (such as a location of the window 302 of FIG. 3), an object prediction is generated.”)
Regarding claim 7, Hotson {modified by Toyama} discloses the method of claim 5 wherein during pre-processing the original image is split such that each patch of the plurality of patches has a region overlapping with neighbouring patches. (Hotson: Figs. 3-4, [0038-0040, 0043-0045].)
Hotson {modified by Toyama} does not disclose explicitly splitting an image into overlapping patches, each having a region overlapping with neighbouring patches. This is, however, well known and commonly practiced in the image-processing art; the examiner takes official notice of this fact.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Hotson {modified by Toyama} with the officially noticed teachings by combining the method for use in a vehicle for identifying a feature of the environment of the vehicle (from Hotson {modified by Toyama}) with the technique of splitting an image into overlapping patches, each having a region overlapping with neighbouring patches (from the Official Notice). Doing so would yield no more than the predictable use of prior-art elements according to their established functions, since all of the claimed elements would continue to operate in the same manner: the method would still work as taught by Hotson {modified by Toyama}, and the technique of splitting an image into overlapping patches would continue to function as is well known in the art. In fact, the inclusion of the officially noticed technique would provide a practical implementation of the method and would thus enable a better and more flexible method for use in a vehicle for identifying a feature of the environment of the vehicle.
Therefore, it would have been obvious to combine Hotson {modified by Toyama} with the Official Notice to obtain the invention as specified in claim 7.
Regarding claim 8, Hotson {modified by Toyama} discloses the method of claim 7, wherein, when generating the heat-map image, each of the second plurality of patches is formed as sub-patches that are each paired with a portion of a patch. (Hotson: Figs. 2-4, [0038, 0043-0044]. Each sub-patch is the same as its paired patch from the (first) plurality of patches and thus is paired with the entire portion of its paired patch.) and wherein:
if the sub-patch is paired to a portion of a patch that is an overlapping region, the method further comprises filling the sub-patch with a singular pixel value based on the output values for the patch to which the portion belongs and the neighbouring patches that share the overlapping region; or
if the sub-patch is paired to a portion of a patch that is not an overlapping region, the method further comprises filling the sub-patch with a singular pixel value based on the output value for the patch to which the portion belongs. (Hotson: Figs. 2-4, [0031-0032, 0038, 0043-0044].)
Hotson {modified by Toyama} does not disclose explicitly further splitting a patch into sub-patches and determining the value of a sub-patch in an overlapping region based on the output values of the patches sharing that region. This is, however, well known and commonly practiced in the image-processing art for object detection; the examiner takes official notice of this fact.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Hotson {modified by Toyama} with the officially noticed teachings by combining the method for use in a vehicle for identifying a feature of the environment of the vehicle (from Hotson {modified by Toyama}) with the technique of further splitting a patch into sub-patches and determining the value of a sub-patch in an overlapping region based on the output values of the patches sharing that region (from the Official Notice). Doing so would yield no more than the predictable use of prior-art elements according to their established functions, since all of the claimed elements would continue to operate in the same manner: the method would still work as taught by Hotson {modified by Toyama}, and the sub-patch technique would continue to function as is well known in the art. In fact, the inclusion of the officially noticed technique would provide a practical implementation of the method and would thus enable a better and more flexible method for use in a vehicle for identifying a feature of the environment of the vehicle.
Therefore, it would have been obvious to combine Hotson {modified by Toyama} with the Official Notice to obtain the invention as specified in claim 8.
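A minimal sketch of the officially noticed technique discussed for claim 8: where a sub-patch lies in a region shared by overlapping patches, its value is based on the output values of every patch sharing the region (here, their mean). The stride, sizes, values, and the choice of mean are hypothetical assumptions for illustration only.

```python
import numpy as np

def heatmap_with_overlap(image_shape, patch_size, stride, patch_values):
    """Accumulate each overlapping patch's output value over its area and
    divide by the coverage count, so pixels in overlapping regions reflect
    every patch sharing the region, while other pixels keep the single
    patch's output value."""
    h, w = image_shape
    ph, pw = patch_size
    acc = np.zeros((h, w), dtype=float)
    cnt = np.zeros((h, w), dtype=float)
    values = iter(patch_values)
    for y in range(0, h - ph + 1, stride):
        for x in range(0, w - pw + 1, stride):
            v = next(values)
            acc[y:y + ph, x:x + pw] += v
            cnt[y:y + ph, x:x + pw] += 1
    return acc / np.maximum(cnt, 1)

# Example: a 4x4 image, 2x2 patches, stride 1 -> nine overlapping patches.
print(heatmap_with_overlap((4, 4), (2, 2), 1, range(9)))
```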
Regarding claim 9, Hotson {modified by Toyama} discloses the method of claim 1 wherein the pre-processing includes performing a colour transformation. (Hotson: [0017-0020]. For example, processing point cloud data to obtain a depth map and registering the depth map with an RGB camera image.)
Regarding claim 10, Hotson {modified by Toyama} discloses the method of claim 9 wherein the colour transformation is a transformation into hue, saturation and green/magenta colour channels. (Toyama: [0015, 0067], HSV or HSI. See the discussion under the 112(b) rejections above.)
Although Hotson {modified by Toyama} teaches RGB images as obtained by a camera (Hotson: [0019-0021]) and HSV or HSI colour channels (Toyama: [0015, 0067]), Hotson {modified by Toyama} does not disclose explicitly that the pre-processing includes the colour transformation into hue, saturation and green/magenta colour channels (interpreted as HSI or HSV colour channels). Such a transformation is, however, well known and commonly practiced in the pre-processing stage of image processing, for example when the acquired image is in RGB colour channels; the examiner takes official notice of this fact.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Hotson {modified by Toyama} with the officially noticed teachings by combining the method for use in a vehicle for identifying a feature of the environment of the vehicle (from Hotson {modified by Toyama}) with the technique of transforming an obtained image from its original colour channels into HSV or HSI colour channels during pre-processing (from the Official Notice). Doing so would yield no more than the predictable use of prior-art elements according to their established functions, since all of the claimed elements would continue to operate in the same manner: the method would still work as taught by Hotson {modified by Toyama}, and the colour-transformation technique would continue to function as is well known in the art. In fact, the inclusion of the officially noticed technique would provide a practical and/or alternative implementation of the method and would thus enable a better and more flexible method for use in a vehicle for identifying a feature of the environment of the vehicle.
Therefore, it would have been obvious to combine Hotson {modified by Toyama} with the Official Notice to obtain the invention as specified in claims 9-10.
Regarding claims 11-12, which depend on claim 1, Hotson {modified by Toyama} discloses processing the output value from the neural network over the plurality of iterations (Hotson: Figs. 2-5), but does not disclose explicitly averaging the output value from the neural network over the plurality of iterations with the first n iterations discounted. This is, however, well known and commonly practiced in the image-processing art involving neural networks, which take a few iterations to converge or stabilize; the examiner takes official notice of this fact.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Hotson {modified by Toyama} with the officially noticed teachings by combining the method for use in a vehicle for identifying a feature of the environment of the vehicle (from Hotson {modified by Toyama}) with the technique of averaging the output value from the neural network over the plurality of iterations, with the first n iterations discounted (from the Official Notice). Doing so would yield no more than the predictable use of prior-art elements according to their established functions, since all of the claimed elements would continue to operate in the same manner: the method would still work as taught by Hotson {modified by Toyama}, and the averaging technique would continue to function as is well known in the art. In fact, the inclusion of the officially noticed technique would provide a practical implementation of the method and would thus enable a better and more reliable method for use in a vehicle for identifying a feature of the environment of the vehicle, by averaging the output results over multiple iterations while discounting the first few iterations, during which the network has not yet converged or stabilized.
Therefore, it would have been obvious to combine Hotson {modified by Toyama} with the Official Notice to obtain the invention as specified in claims 11-12.
Regarding claim 13, Hotson {modified by Toyama} discloses the method of claim 1 wherein pre-processing includes at least one of: scaling the original image; reducing the resolution of the original image; and reducing the dimensions of the original image to a one-dimensional array. (Hotson: Fig. 3. “[0039] … the image 300 may be down sampled to process the full image 300 or a larger portion or different scale window 302 of the image 300.”)
Regarding claim 14, Hotson {modified by Toyama} discloses the method of claim 1 wherein: presenting the input image to the neural network includes presenting the input image to multiple neural networks simultaneously or consecutively, wherein each of the multiple neural networks are trained differently; obtaining the output value from the neural network includes obtaining the output values from each of the multiple neural networks; and post-processing the output value from the neural network to identify a feature of the environment of a vehicle includes post-processing each of the output values from the multiple neural networks and combining or comparing the post-processed output values to identify a feature of the environment of a vehicle. (Hotson: [0031-0032, 0037, 0041-0047, 0050]. “[0041] … a plurality of different recurrent neural networks may be used to generate each feature map. For example, a feature map for pedestrian detection may be generated using a neural network trained for pedestrian detection while a feature map for vehicle detection may be generated using a neural network trained for vehicle detection. Thus, a plurality of different feature maps may be generated for the single image 300 shown in FIG. 3.” “[0045] … Thus, recurrent neural networks may be used to generate the feature maps as well as the object predictions.” “[0047] … In one embodiment, a single neural network, or set of neural networks is used during each stage such that the recurrent connections 420, 422 simply feedback outputs from previous frames as input into a current frame.” The claimed combining or comparing is implied in order to classify the contents of an image (“[0032] According to one embodiment, a deep neural network 200 of FIG. 2 may be used to classify the content(s) of an image into four different classes: a first class, a second class, a third class, and a fourth class.”) These neural networks are trained differently due to different tasks or different types of objects that they need to classify. ([0031-0032, 0037, 0050]))
Regarding claim 15, Hotson {modified by Toyama} discloses the method of claim 14 wherein the combining or comparing of the post-processed output values includes combining and/or averaging the output values from each of the multiple neural networks, and/or applying a swarm optimization algorithm to the post-processed output values to identify the feature of the environment of the vehicle. (Hotson: [0031-0032, 0041-0047]. At least the claimed combining is implied, as discussed above.)
Regarding claim 16, Hotson {modified by Toyama} discloses the method of claim 1, further including controlling the speed and/or direction of the vehicle based on the identified feature. (Hotson: Figs. 1-2, 5 and 7, [0024-0028, 0033, 0048, 0055].)
Claims 17 and 25 are computer-readable medium claims (Hotson: Fig. 8) similarly rejected, respectively, as the method claims 1 and 16 (the claimed 4th and 5th layers are interpreted as part of the disclosed 210 in Fig. 2 of Hotson).
Claims 18 and 23-24 are computer-readable medium claims (Hotson: Fig. 8) similarly rejected, respectively, as the method claims 10, 14 and 2.
Claim 19 is a computer-readable medium claim (Hotson: Fig. 8) similarly rejected as the method claim 4 or 13.
Regarding claim 20, Hotson {modified by Toyama} discloses the perception software stack of claim 17, wherein the input layer of the neural network comprises fewer input nodes than the number of pixels in the input image. (Hotson: implied by [0031, 0038-0039]. For example, “[0032] … For example, larger networks may include an input node 202 for each pixel of an image, and thus may have hundreds, thousands, or other number of input nodes.” This implies that other networks include an input node for more than one pixel. “[0038] … In one embodiment, the image 300 is too large to be processed at full resolution by an available neural network. Thus, the image may be processed one sub-region at a time.” This implies that the number of input nodes is much smaller than the number of pixels.)
Regarding claim 21, Hotson {modified by Toyama} discloses the perception software stack of claim 20 wherein the neural network comprises 150 input nodes or fewer in the input layer. (Hotson: Fig. 3, [0031-0032, 0038-0039]. As shown in Fig. 3, the sub-region 302 is less than 1/100 of the image 300. Even if we assume that the image 300 includes hundreds or thousands of pixels [0031], the sub-region 302 may include a few pixels or a few tens of pixels. Since only “larger networks may include an input node 202 for each pixel of an image” [0031], the claimed “150 input nodes or fewer in the input layer” are implied.)
Regarding claim 22, Hotson {modified by Toyama} discloses the perception software stack of claim 17 wherein the first feedback output comprises two feedback output nodes, wherein the two feedback output nodes are configured to output a first and a second value respectively, the first and second values indicating a starting point in the input image from which to select a next iteration of pixels in the input image to process by the neural network. (Hotson: Figs. 2-4. “[0038] … For example, the window 302 may start in a corner and then be subsequently moved from point to point to detect features.” The claimed “first and second values” are interpreted as the coordinate values of the “corner” where the window 302 may start.)
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 C.F.R. § 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 C.F.R. § 3.73(b).
Claims 1-2 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 4 of the copending application 18246483. Although the conflicting claims are not identical, they are not patentably distinct from each other because the claims of the instant application are anticipated by the claims of the copending application ‘483.
The conflicting claims are reproduced below for comparison.

Claims of the instant application:
1. (Original): A computer-implemented method for use in a vehicle for identifying a feature of the environment of the vehicle, the method comprising:
receiving an original image from a sensor or camera; pre-processing the original image to produce an input image;
presenting the input image to a neural network;
wherein the neural network is trained to classify a feature in an image presented to it, the neural network having an input layer, a hidden layer and an output layer, the output layer including three outputs:
a first feedback output for selecting pixels from the input image to input at the input layer at each iteration of the neural network;
a second feedback output for selecting a colour channel of the selected pixels to input at the input layer at each iteration; and a third output for outputting an output value indicative of a classification result from the neural network;
obtaining the output value from the neural network; and
post-processing the output value from the neural network to identify a feature of the environment of a vehicle.
Claims of copending application 18246483:
1. (Original): A computer device comprising a memory and a processor, the computer device configured to be fitted to a vehicle and to communicate with a camera or sensor, the processor being configured to:
pre-process an image received from the camera or sensor data from the sensor to produce an input image;
present the input image to a neural network stored in the memory of the computer device;
wherein the neural network is trained to classify a feature in an image presented to it, the neural network having an input layer, a hidden layer and an output layer, the output layer including three outputs:
a first feedback output for selecting pixels from the input image to input at the input layer at each iteration of the neural network;
a second feedback output for selecting a colour channel of the selected pixels to input at the input layer at each iteration; and a third output for outputting an output value indicative of a classification result from the neural network;
the processor further configured to obtain the output value from the neural network; and
post-process the output value from the neural network to identify a feature of the environment of a vehicle.
Claims of the instant application:
2. (Original): The method of claim 1 wherein the feature is an object, such that the method is for object detection, or wherein the feature is a road or driving surface, such that the method is for road segmentation; or wherein the feature is present in the input image, such that the method is for image classification.
Claims of copending application 18246483:
4. The computer device of any preceding claim wherein the neural network stored in the memory is configured to perform one or more specific tasks including image classification, object detection and road segmentation.
Dependent claims 3-16 are provisionally rejected on the ground of nonstatutory double patenting as being obvious over claim 1 of copending application ’483 in view of the prior art of record relied upon in the rejections above, as applied to the claims above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FENG NIU whose telephone number is (571)272-9592. The examiner can normally be reached on Monday - Friday, 8am-5pm PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park can be reached on (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FENG NIU/Primary Examiner, Art Unit 2669