DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 have been examined.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: From P[006. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to because Figure 6A includes a decision step 622 to determine whether a value is within a range (P[0065]). Figure 6A includes the Yes decision for step 622, but does not show a No decision and the step that should follow the “No”. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The disclosure is objected to because of the following informalities: P[0001] references application number 18/082535, which has since issued as Patent Number 12,142,061. The patent number should be added to improve the quality of the document.
Appropriate correction is required.
The use of the term BLUETOOTH (P[0034]), which is a trade name or a mark used in commerce, has been noted in this application. The term should be accompanied by the generic terminology; furthermore, the term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce such as ™, SM, or ® following the term.
Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites, “a determination that the road image is ambiguous” in line 6, while line 4 recites “determining whether the road image is ambiguous”. It is unclear whether line 6 recites a new determination or refers back to the earlier “determining”. The examiner assumes the “determination” refers to the “determining” and that the limitation should read “the determination”.
Claim 1 recites, “process road images captured by the one or more vehicles” in lines 13 and 14, “obtaining a road image captured by a vehicle” in line 3, and “distributing the autonomous driving model to one or more vehicles” in line 12. The only capturing of images is performed by the “vehicle”, and the “one or more vehicles” only receive the model. It is therefore unclear whether the road image of line 3 is captured by the “one or more vehicles”, or whether the road images of lines 13 and 14 are captured by “the vehicle”.
Claim 9 recites, “a determination that the class that has the highest probability belongs to the predefined subset of classes” in lines 3 and 4, while claim 7 also recites, “a determination that the class that has the highest probability belongs to a predefined subset of classes” in lines 4 and 5. It is unclear whether this is a new determination or the same determination as in claim 7. For purposes of continued examination, the examiner assumes it is the same determination.
Claim 11 recites, “a determination that the road image is ambiguous” in line 8, while line 6 recites “determining whether the road image is ambiguous”. It is unclear whether line 8 recites a new determination or refers back to the earlier “determining”. The examiner assumes the “determination” refers to the “determining” and that the limitation should read “the determination”.
Claim 11 recites, “process road images captured by the one or more vehicles” in lines 15 and 16, “obtaining a road image captured by a vehicle” in line 5, and “distributing the autonomous driving model to one or more vehicles” in line 14. The only capturing of images is performed by the “vehicle”, and the “one or more vehicles” only receive the model. It is therefore unclear whether the road image of line 5 is captured by the “one or more vehicles”, or whether the road images of lines 15 and 16 are captured by “the vehicle”.
Claim 18 recites, “a determination that the road image is ambiguous” in line 5, while line 4 recites “determining whether the road image is ambiguous”. It is unclear whether line 5 recites a new determination or refers back to the earlier “determining”. The examiner assumes the “determination” refers to the “determining” and that the limitation should read “the determination”.
Claim 18 recites, “process road images captured by the one or more vehicles” in lines 12 and 13, “obtaining a road image captured by a vehicle” in line 3, and “distributing the autonomous driving model to one or more vehicles” in line 11. The only capturing of images is performed by the “vehicle”, and the “one or more vehicles” only receive the model. It is therefore unclear whether the road image of line 3 is captured by the “one or more vehicles”, or whether the road images of lines 12 and 13 are captured by “the vehicle”.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 10-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Su et al (U.S. Patent Application Publication No. 2019/0370566 A1) in view of Kraft et al (U.S. Patent Application Publication No. 2017/0140245 A1) and Gray (U.S. Patent Application Publication No. 2018/0373263 A1).
Regarding claims 1, 11, and 18, Su et al teach the claimed method for lane marker detection for vehicle driving: an image processing method for lane classification (Figure 3) for a first vehicle 110 traveling on a road (P[0047], Figures 1 and 2), comprising:
the claimed computer system including processor and memory, image processing system P[0048];
the claimed obtaining a road image captured by a vehicle: the camera carried by the vehicle for capturing the images of the road captures consecutive images of the road in the near range of the vehicle (P[0028]); the claimed road image having a plurality of regions: “candidate lane markings 510, 520 are detected in the image of a road 500” P[0058], and “respective candidate tracks 530, 540 are divided into a plurality of cells C1-C7” (P[0059] and Figures 4 and 5), the plurality of cells equating to the claimed plurality of regions;
the claimed determining whether the road image is ambiguous for lane marker classification, “the optical local features of a cell C1-C7 are determined by analyzing the gradient pattern of the gray values of pixels corresponding to the respective linear marklet 630 connecting the edges of the candidate lane marking 610, 620” P[0065], “in step 330 of FIG. 3, at least one local feature of cells C1-C7 is determined by analyzing the inlier ratio of each cell. In this example, the inlier ratio of a cell C1-C7 is determined as the ratio between the number of marklets 630 determined for the cell C1-C7 and the number of image pixel lines comprised by the respective cell C1-C7.” (P[0074] and Figure 3), “cells can be checked from the near to far region of the image of a road 200 such as to determine the range of the detected lane marking. It follows that both optical and geometric features of cells C1-C7 can be checked to determine if a cell is part of a valid lane marking or not.” P[0083], and “during optical checking the similarity between a cell and a cell model is calculated. For this purpose the model is initially built from several cells in near range (e.g. within 5 meters as measured on the surface of a road). If the similarity is sufficient, then the cell is considered valid, otherwise it is considered invalid.” P[0084].
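For illustration of the cell-based checking cited above, the following is a minimal Python sketch of a per-cell inlier ratio and a near-range cell-model comparison. All identifiers, data structures, and threshold values are hypothetical assumptions for illustration, not teachings of Su et al.

```python
# Minimal sketch of the cell-based validity check described by Su et al
# (P[0074], P[0083]-P[0084]). All names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Cell:
    marklet_count: int       # marklets detected in the cell (P[0074])
    pixel_line_count: int    # image pixel lines the cell comprises (P[0074])
    features: List[float]    # optical features, e.g. gray-value gradients

def inlier_ratio(cell: Cell) -> float:
    """Ratio of marklets to pixel lines for the cell (P[0074])."""
    return cell.marklet_count / cell.pixel_line_count

def check_cells(cells: List[Cell], near: int = 3, sim_min: float = 0.8) -> List[bool]:
    """Check cells from near to far range against a cell model built from
    near-range cells (P[0083]-P[0084]); sufficiently similar cells are valid."""
    n = min(near, len(cells))
    dims = len(cells[0].features)
    # Build the cell model by averaging the features of the near-range cells.
    model = [sum(c.features[i] for c in cells[:n]) / n for i in range(dims)]
    valid = []
    for cell in cells:
        diff = sum(abs(f - m) for f, m in zip(cell.features, model)) / dims
        similarity = 1.0 / (1.0 + diff)   # toy similarity measure
        valid.append(similarity >= sim_min)
    return valid
```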
Su et al do not explicitly teach the claimed method is performed at a computer system that includes a processor and memory, or the claimed computer readable storage medium for executing programs on the computer system, but do teach an image processing system (P[0048]). A person of ordinary skill in the art would understand that an image processing system would typically be a computer system and would include a processor and memory. Additionally, Su et al do not teach the claimed generating a labeled road image that includes identifying lane markers in the road image, or the claimed adding the labeled road image to a corpus of training data for training a model to generate an autonomous driving model. The examiner interprets this as an identification of images in a collection of images used for updating/retraining of the model. Such limitations would be combined into Su et al as a modification to the storage of the images (adding labels), and would act as a learning model to properly identify the lane markings.
Kraft et al teach the claimed method is performed at a computer system that includes a processor and memory: a method for processing images (abstract), and a computer system 1300 for executing the processes that includes a processor 1302 and main memory 1304 (Figure 13, P[0095] and P[0097]); and
the claimed computer readable medium storing programs for execution by the processor, “computer readable storage media that stores instructions executable by one or more processing units” P[0024], and “The storage unit 1316 includes a machine-readable medium 1322 on which is stored instructions 1324 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 1324 may also reside, completely or at least partially, within the main memory 1304 or within the processor 1302 (e.g., within a processor's cache memory) during execution thereof by the computer system 1300, the main memory 1304 and the processor 1302 also constituting machine-readable media.” (P[0099] and Figure 13).
Kraft et al further teach:
the claimed generating a labeled road image that includes identifying lane markers in the road image, and the claimed adding the labeled road image to a corpus of training data for training a model to generate an autonomous driving model, “the machine learning model 106 may be an end-to-end recognition system (a non-linear map) that takes raw pixels from the training images 401 directly to internal labels” (P[0069] and Figure 4), the machine learning model is updated based on the traffic information P[0070], and the system includes a training set having training images and training labels P[0071], the set of training images equate to the claimed road images, the updating of the machine learning model equates to the claimed training of the model. The updating of the machine learning model would be applied to the machine learning classifier for classification of lane markings of Su et al.
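As an illustration of the corpus update described above, the following is a hypothetical Python sketch in which labeled road images are appended to a training set and the model is retrained. The names and the fit() interface are assumptions for illustration, not disclosures of Kraft et al.

```python
# Hypothetical sketch of the training-corpus update (Kraft et al
# P[0069]-P[0071]): labeled road images are added to a training set and the
# model is updated on it. Names and the fit() interface are assumptions.
from dataclasses import dataclass, field
from typing import Any, List, Tuple

@dataclass
class TrainingCorpus:
    samples: List[Tuple[Any, str]] = field(default_factory=list)

    def add(self, image: Any, label: str) -> None:
        """Add a labeled road image (the claimed 'adding ... to a corpus')."""
        self.samples.append((image, label))

def update_model(model: Any, corpus: TrainingCorpus) -> Any:
    """Retrain the classifier on the enlarged corpus (the claimed 'training
    a model'); a real system would call into its ML framework here."""
    images = [img for img, _ in corpus.samples]
    labels = [lbl for _, lbl in corpus.samples]
    model.fit(images, labels)   # assumed scikit-learn-style interface
    return model
```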
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the image processing method for lane classification of Su et al with the computer system, labeling of training images and update of the machine learning model of Kraft et al, with a reasonable expectation of success, in order to provide faster processing of images, and reduce the amount of data transmitted over a network (Kraft et al P[0101]).
Su et al and Kraft et al do not teach the claimed distributing the autonomous driving model to one or more vehicles to facilitate autonomously driving the one or more vehicles. Gray teaches, “The generation of the machine-learned model can be performed by one or more computers and data corresponding to generated machine-learned models can be transmitted to the vehicle.” P[0017], “autonomous-capable vehicle 390, which receives the machine-learned models generated by the training data processing system 300, can also transmit training data 376 to the training data processing system 300” P[0062], “The supervised training sub-system 330 generates supervised training results 331, using which a model generator 360 of the training data processing system 300 can generate or update machine-learned models for identifying drivable space depicted in image frames for use in collision-avoidance systems of autonomous-capable vehicles 390. The supervised training results 331 can comprise image frames that are labeled for drivable space. For instance, each region (e.g., a pixel, a group of pixels, etc.) of each image frame can have an associated label identifying the corresponding region as depicting drivable space or non-drivable space.” P[0064], and “The generated or updated machine-learned model can be transferred or transmitted to autonomous-capable vehicles” P[0087].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the image processing method for lane classification of Su et al and the computer system, labeling of training images, and update of the machine learning model of Kraft et al with the transmission of the updated machine-learned model for autonomous vehicles, based on the labeled regions of the images, to the vehicles of Gray, with a reasonable expectation of success, in order to improve image quality and improve performance and accuracy of the image analyses (Gray P[0038]).
Regarding claims 2 and 14, Su et al teach the claimed method of claim 1 and the system of claim 11 (see above), wherein determining whether the road image is ambiguous for lane marker classification includes:
the claimed determining a plurality of regions in the road image, “the lane marking is classified as a solid line or as a dashed line based on the local features of a plurality of said cells” P[0023]; and
the claimed classifying each region to have a lane marker classification from a plurality of lane marker classifications, “The lane marking can be classified as a solid line or as a dashed line based on the variance of the inlier ratio for consecutive cells of the candidate track. Here, a low variance of the inlier ratio indicates that a consistent line type is being detected in the consecutive cells, for example indicating that a detected lane marking should be classified as a solid line. By contrast, a high variance of the inlier ratio indicates that an inconsistent line type is being detected in the consecutive cells, for example indicating that a detected lane marking should be classified as a dashed line.” P[0027], lane marking classifier must be able to distinguish a true lane marking from guardrails or curbs captured in the images P[0007] (distinguishing between lane markers and curbs implies determining that it is a curb in the image), the determined local feature of a cell provides an indication if the candidate lane marking is a true lane marking or not P[0017], and “the cell model can define a threshold for the inlier ratio of cells defining if a cell should be considered to match a lane marking segment or not” P[0031].
Regarding claim 3, Su et al teach that the claimed plurality of lane marker classifications includes dashed, solid lane, curb, and no lane marker classes, “the lane marking is classified as a solid line or as a dashed line based on the local features of a plurality of said cells” P[0023], “The lane marking can be classified as a solid line or as a dashed line based on the variance of the inlier ratio for consecutive cells of the candidate track. Here, a low variance of the inlier ratio indicates that a consistent line type is being detected in the consecutive cells, for example indicating that a detected lane marking should be classified as a solid line. By contrast, a high variance of the inlier ratio indicates that an inconsistent line type is being detected in the consecutive cells, for example indicating that a detected lane marking should be classified as a dashed line.” P[0027], lane marking classifier must be able to distinguish a true lane marking from guardrails or curbs captured in the images P[0007] (distinguishing between lane markers and curbs implies determining that it is a curb in the image), the determined local feature of a cell provides an indication if the candidate lane marking is a true lane marking or not P[0017], and “the cell model can define a threshold for the inlier ratio of cells defining if a cell should be considered to match a lane marking segment or not” P[0031], determining that it is not a lane marking segment equates to the claimed no lane marker class.
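To illustrate the variance-based classification quoted above, the following is a minimal Python sketch; the threshold values and the no-marking fallback are hypothetical, since the reference supplies no numeric values.

```python
# Illustrative sketch of the variance-based decision in Su et al P[0027]:
# low variance of the inlier ratio across consecutive cells suggests a solid
# line, high variance a dashed line; a low mean ratio suggests no marking
# segment (P[0031]). Thresholds are hypothetical.
from statistics import pvariance
from typing import List

def classify_track(inlier_ratios: List[float],
                   marking_min: float = 0.3,
                   variance_max: float = 0.05) -> str:
    mean_ratio = sum(inlier_ratios) / len(inlier_ratios)
    if mean_ratio < marking_min:
        return "no lane marker"   # cells do not match a marking segment (P[0031])
    if pvariance(inlier_ratios) < variance_max:
        return "solid"            # low variance: consistent line type (P[0027])
    return "dashed"               # high variance: inconsistent line type (P[0027])

# A near-constant ratio classifies as solid; an alternating ratio as dashed.
print(classify_track([0.95, 0.93, 0.96, 0.94]))  # solid
print(classify_track([0.90, 0.10, 0.85, 0.15]))  # dashed
```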
Regarding claim 4, Su et al teach that each region corresponds to one respective single pixel of the road image, “Preferably, determining at least one local feature of cells can include determining at least one inlier ratio of the cells. For example, the inlier ratio of a cell can be determined as the ratio between the number of marklets determined for the cell and the maximum number of marklets which are determinable for a single cell, for example the number of image pixel lines comprised by the respective cell.” P[0025], and “the optical local features of a cell C1-C7 are determined by analyzing the gradient pattern of the gray values of pixels corresponding to the respective linear marklet 630 connecting the edges of the candidate lane marking 610, 620. In this example, the gradient patterns of gray values is used to determine the likelihood that the respective pixels connect edges of a candidate lane marking, and thus to provide an indication if the respective cell C1-C7 is truly associated with a lane marking or not” P[0065].
Regarding claim 5, Su et al teach that each region corresponds to at least two pixels of the road image, “a marklet of a cell can be determined as a horizontal row of pixels in the image of a road” P[0014], and “the optical local features of a cell C1-C7 are determined by analyzing the gradient pattern of the gray values of pixels corresponding to the respective linear marklet 630 connecting the edges of the candidate lane marking 610, 620. In this example, the gradient patterns of gray values is used to determine the likelihood that the respective pixels connect edges of a candidate lane marking, and thus to provide an indication if the respective cell C1-C7 is truly associated with a lane marking or not” P[0065].
Regarding claims 6 and 15, Su et al teach the claimed determining probabilities that the respective region should be classified into the plurality of lane marker classifications, “the respective gradient pattern of gray values can be used to determine the likelihood that the respective pixels connect edges of a candidate lane marking” P[0017], “an inlier ratio close to 1 is likely to indicate that the cell is associated with a solid line lane marking, whereas an inlier ratio close to 0 is likely to indicate that the cell is not associated with a solid line lane marking” P[0026], and “the gradient patterns of gray values is used to determine the likelihood that the respective pixels connect edges of a candidate lane marking, and thus to provide an indication if the respective cell C1-C7 is truly associated with a lane marking or not” P[0065], the likelihood equates to the claimed probabilities.
Regarding claim 10, Su et al and Kraft et al do not teach that the claimed labeled image includes information identifying lane markers or lanes. Gray teaches, “the system can label regions of the image frames as corresponding to drivable space or non-drivable space” (abstract), and “image analysis 110 can identify drivable space depicted in the processed image frames 106 by analyzing the processed image frames 106 and labeling appropriate pixels in the processed image frames 106 as depicting drivable space. For instance, pixels corresponding to a paved road on which the vehicle 10 is traveling can be labeled by the image analysis 110 as drivable space.” P[0040], the pixels corresponding to a paved road equate to the claimed information identifying one or more lanes in the respective labeled image.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the image processing method for lane classification of Su et al and the computer system, labeling of training images and update of the machine learning model of Kraft et al with the labeling of pixels corresponding to a paved road of Gray, with a reasonable expectation of success, in order to improve image quality and improve performance and accuracy of the image analyses (Gray P[0038]).
Regarding claim 12, Su et al do not explicitly teach that the claimed one or more vehicles includes the vehicle that captures the road image, but do teach updating the cell model of the host vehicle to adapt to changing characteristics of the lane markings (P[0089]). Gray teaches, “The generation of the machine-learned model can be performed by one or more computers and data corresponding to generated machine-learned models can be transmitted to the vehicle.” P[0017], and “autonomous-capable vehicle 390, which receives the machine-learned models generated by the training data processing system 300, can also transmit training data 376 to the training data processing system 300” P[0062].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the image processing method for lane classification of Su et al and the computer system, labeling of training images, and update of the machine learning model of Kraft et al with the transmission of the updated machine-learned model for autonomous vehicles and receiving updates at the vehicle of Gray, with a reasonable expectation of success, in order to improve image quality and improve performance and accuracy of the image analyses (Gray P[0038]).
Regarding claim 13, Su et al do not teach the claimed transmitting the labeled road image to a remote server to be added to the corpus of training images. Kraft et al teach, “The obtained images are transmitted to the moving vehicle analysis system 101 via the external system interface 201.” P[0059], and “The image store 102 shown in FIG. 4 also receives training images 401 and transmits them to the vehicular path detection module 105. The vehicular path detection module 105 performs edge analysis, as illustrated and described in detail above in FIG. 2, in the training images 401 to identify pairs of edges, each pair of edges corresponding to a vehicular path.” (P[0061] and Figure 4). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the image processing method for lane classification of Su et al and the updating of a machine-learned model for autonomous vehicles based on the labeled regions of the images of Gray with the labeling of images in an external system of Kraft et al, with a reasonable expectation of success, in order to provide faster processing of images, and reduce the amount of data transmitted over a network (Kraft et al P[0101]).
Regarding claim 19, Su et al teach that the claimed determining whether the road image is ambiguous for lane marker classification includes determining whether each region of the road image is ambiguous for lane marker classification, “each marklet of a cell provides local information about the cell, more specifically, information about a particular local segment of the candidate lane marking” P[0015], “the optical local features of a cell C1-C7 are determined by analyzing the gradient pattern of the gray values of pixels corresponding to the respective linear marklet 630 connecting the edges of the candidate lane marking 610, 620” P[0065], “in step 330 of FIG. 3, at least one local feature of cells C1-C7 is determined by analyzing the inlier ratio of each cell. In this example, the inlier ratio of a cell C1-C7 is determined as the ratio between the number of marklets 630 determined for the cell C1-C7 and the number of image pixel lines comprised by the respective cell C1-C7.” (P[0074] and Figure 3), “cells can be checked from the near to far region of the image of a road 200 such as to determine the range of the detected lane marking. It follows that both optical and geometric features of cells C1-C7 can be checked to determine if a cell is part of a valid lane marking or not.” P[0083], and “during optical checking the similarity between a cell and a cell model is calculated. For this purpose the model is initially built from several cells in near range (e.g. within 5 meters as measured on the surface of a road). If the similarity is sufficient, then the cell is considered valid, otherwise it is considered invalid.” P[0084].
Regarding claim 20, Su et al teach the claimed determining a set of probabilities of classifying each respective region into a plurality of predefined classes, and the claimed determining, based on values of the set of probabilities of the respective region, whether each region of the plurality of regions of the road image has the ambiguous lane marker classification: “the lane marking is classified as a solid line or as a dashed line based on the local features of a plurality of said cells” P[0023], “The lane marking can be classified as a solid line or as a dashed line based on the variance of the inlier ratio for consecutive cells of the candidate track. Here, a low variance of the inlier ratio indicates that a consistent line type is being detected in the consecutive cells, for example indicating that a detected lane marking should be classified as a solid line. By contrast, a high variance of the inlier ratio indicates that an inconsistent line type is being detected in the consecutive cells, for example indicating that a detected lane marking should be classified as a dashed line.” P[0027], “the respective gradient pattern of gray values can be used to determine the likelihood that the respective pixels connect edges of a candidate lane marking” P[0017], “an inlier ratio close to 1 is likely to indicate that the cell is associated with a solid line lane marking, whereas an inlier ratio close to 0 is likely to indicate that the cell is not associated with a solid line lane marking” P[0026], and “the gradient patterns of gray values is used to determine the likelihood that the respective pixels connect edges of a candidate lane marking, and thus to provide an indication if the respective cell C1-C7 is truly associated with a lane marking or not” P[0065], the likelihood equates to the claimed probabilities.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 4, 6-13, and 15-20 of U.S. Patent No. 11,551,459 B1. Although the claims at issue are not identical, they are not patentably distinct from each other because the claim limitations are recited in different orders and arrangements between the application and the patent, and the application claims are recited more broadly than the patented claims.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 and 5-20 of U.S. Patent No. 12,142,061 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claim limitations are recited in different orders and arrangements between the application and the patent, and the application claims are recited using claim language that is synonymous with the claim limitations of the patented claims.
Allowable Subject Matter
Claims 7-9, 16, and 17 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, and to include all of the limitations of the base claim and any intervening claims (allowability also assumes an approved terminal disclaimer).
The following is a statement of reasons for the indication of allowable subject matter: The reason for indicating allowable subject matter over the prior art of record is based on the combined limitations of claims 1, . The reasons for indicating allowable subject matter assume the future submission of an approved terminal disclaimer.
The closest prior art of record is Su et al (U.S. Patent Application Publication No. 2019/0370566 A1). Su et al disclose an image processing method. The method includes determining a candidate track in an image of a road. The candidate track is modeled as a parameterized line or curve corresponding to a candidate lane marking in the image of a road. The method further includes dividing the candidate track into a plurality of cells, each cell corresponding to a segment of the candidate track. The method further includes determining at least one marklet for a plurality of said cells, each marklet of a cell corresponding to a line or curve connecting left and right edges of the candidate lane marking. The method further includes determining at least one local feature of each of said plurality of cells based on characteristics of said marklets, determining at least one global feature of the candidate track by aggregating the local features of the plurality of cells, and determining if the candidate lane marking represents a lane marking based on the at least one global feature.
Regarding claims 1, 2, 6, and 7, Su et al, taken either individually or in combination with other prior art, fails to teach or render obvious a method for lane marker detection for vehicle driving. The method functions using a computer system including one or more processors and memory. The method includes obtaining a road image captured by a vehicle, and determining whether the road image is ambiguous for lane marker classification. In accordance with the determination that the road image is ambiguous for lane marker classification, the method performs generating a labeled road image, including identifying one or more lane markers in the road image; adding the labeled road image to a corpus of training data for training a model to generate an autonomous driving model; and distributing the autonomous driving model to one or more vehicles. The autonomous driving model is configured to process road images captured by the vehicle to facilitate at least partially autonomously driving the one or more vehicles. Determining whether the road image is ambiguous for lane marker classification includes determining a plurality of regions of the road image, and classifying each region of the plurality of regions to have a lane marker classification from a plurality of lane marker classifications. The classifying includes determining, for each respective region of the plurality of regions, probabilities that the respective region should be classified into the plurality of lane marker classifications. For each region of the plurality of regions in the road image, the method determines a class that has a highest probability. In accordance with a determination that the class that has the highest probability belongs to a predefined subset of classes, a total lane pixel count is incremented. In accordance with a determination that the highest probability is within a predetermined range of values from a first threshold value, an ambiguous lane pixel count is incremented. Finally, for each region, the method determines a ratio of the ambiguous lane pixel count to the total lane pixel count.
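For illustration of the claimed ambiguity measure summarized above, the following is a minimal Python sketch. The class names, threshold values, and the aggregate form of the ratio are hypothetical assumptions, not language from the claims.

```python
# Sketch of the claimed ambiguity test: per-region class probabilities feed
# pixel counts whose ratio measures ambiguity. All names, thresholds, and
# the class set are hypothetical.
from typing import Dict, List

LANE_CLASSES = {"solid", "dashed"}   # hypothetical predefined subset of classes

def ambiguity_ratio(region_probs: List[Dict[str, float]],
                    first_threshold: float = 0.6,
                    band: float = 0.15) -> float:
    """Ratio of the ambiguous lane pixel count to the total lane pixel count."""
    total_lane = 0
    ambiguous = 0
    for probs in region_probs:
        best = max(probs, key=probs.get)            # class with highest probability
        if best in LANE_CLASSES:                    # belongs to predefined subset
            total_lane += 1                         # increment total lane pixel count
            if abs(probs[best] - first_threshold) <= band:
                ambiguous += 1                      # within range of first threshold
    return ambiguous / total_lane if total_lane else 0.0
```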
Regarding claims 11 and 14-16, Su et al, taken either individually or in combination with other prior art, fails to teach or render obvious a computer system that includes one or more processors, and memory storing one or more programs configured for execution by the one or more processors. The one or more programs comprise instructions for obtaining a road image captured by a vehicle, and determining whether the road image is ambiguous for lane marker classification. In accordance with the determination that the road image is ambiguous for lane marker classification: generating a labeled road image from the road image, including identifying one or more lane markers in the road image; adding the labeled road image to a corpus of training data for training a model to generate an autonomous driving model; and distributing the autonomous driving model to one or more vehicles. The autonomous driving model is configured to process road images captured by the vehicle to facilitate at least partially autonomously driving the one or more vehicles. The instructions for determining whether the road image is ambiguous for lane marker classification include instructions for determining a plurality of regions of the road image, and classifying each region of the plurality of regions to have a lane marker classification from a plurality of lane marker classifications. The instructions for the classifying include instructions for determining, for each respective region of the plurality of regions, probabilities that the respective region should be classified into the plurality of lane marker classifications. The one or more programs further include instructions for determining, for each region of the plurality of regions, a highest probability from the probabilities, and, in accordance with a determination that the highest probability is less than a threshold value, classifying the region into a no lane marker class.
Regarding claims 11, 14, 15, and 17, Su et al, taken either individually or in combination with other prior art, fails to teach or render obvious a computer system that includes one or more processors, and memory storing one or more programs configured for execution by the one or more processors. The one or more programs comprise instructions for obtaining a road image captured by a vehicle, and determining whether the road image is ambiguous for lane marker classification. In accordance with the determination that the road image is ambiguous for lane marker classification: generating a labeled road image from the road image, including identifying one or more lane markers in the road image; adding the labeled road image to a corpus of training data for training a model to generate an autonomous driving model; and distributing the autonomous driving model to one or more vehicles. The autonomous driving model is configured to process road images captured by the vehicle to facilitate at least partially autonomously driving the one or more vehicles. The instructions for determining whether the road image is ambiguous for lane marker classification include instructions for determining a plurality of regions of the road image, and classifying each region of the plurality of regions to have a lane marker classification from a plurality of lane marker classifications. The instructions for the classifying include instructions for determining, for each respective region of the plurality of regions, probabilities that the respective region should be classified into the plurality of lane marker classifications. The one or more programs further include instructions for determining, for each region of the plurality of regions, a highest probability from the probabilities, and, in accordance with a determination that the highest probability is greater than a threshold value, determining that the highest probability corresponds to an assigned class and classifying the respective region into the assigned class.
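The threshold logic recited in the two preceding paragraphs can be illustrated with a minimal Python sketch; the class names and the threshold value are hypothetical.

```python
# Combined sketch of the recited threshold logic: a region whose highest
# per-region probability falls below the threshold receives the no-lane-marker
# class; otherwise it receives the class that produced the highest probability.
from typing import Dict

def assign_class(probs: Dict[str, float], threshold: float = 0.5) -> str:
    best = max(probs, key=probs.get)   # highest probability from the probabilities
    if probs[best] < threshold:
        return "no lane marker"        # below threshold: no lane marker class
    return best                        # above threshold: the assigned class

# Example usage with hypothetical per-region probabilities:
print(assign_class({"solid": 0.30, "dashed": 0.25, "curb": 0.20}))  # no lane marker
print(assign_class({"solid": 0.80, "dashed": 0.10, "curb": 0.05}))  # solid
```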
Related Art
The examiner points to the related art of Shima et al (U.S. Patent No. 5,555,312) for the recognition of dashed and solid lines on the road (Figures 2-4); Hyun et al (U.S. PGPub 2020/0125860 A1) for determining probabilities of lane lines (Figure 11) to distinguish between cracks in the pavement and lane lines (Figures 4-6); and Lee et al (U.S. PGPub 2020/0218908 A1) for generating a confidence map for lane markers and raised pavement markers to verify the lane markings (abstract, Figure 8).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DALE W HILGENDORF whose telephone number is (571) 272-9635. The examiner can normally be reached Monday through Friday, 9:00-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jelani Smith can be reached at 571-270-3969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DALE W HILGENDORF/Primary Examiner, Art Unit 3662