Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mei et al. (US 9286524) in view of Yu et al. (US 20230326215).
Regarding claim 13, Mei teaches an apparatus, comprising: one or more controllers associated with one or more memories, wherein the one or more controllers are configured to cause the apparatus to:
receive an image (claim 1: receive an image of a vehicle environment); generate, using a feature extractor having one or more convolutional layers and taking the image as a first input, one or more representation vectors corresponding to the one or more convolutional layers (claim 1: a convolutional neural network including at least one convolutional layer); apply the one or more self-attention based transformers to a third input that is based on the one or more representation vectors to obtain an indication of one or more drivable areas in the image (claim 1: calculate a prediction regarding whether the image contains a traffic lane; fig. 6; col. 10, lines 24-40);
Mei does not teach apply one or more self-attention based transformers to a second input that is based on the one or more representation vectors to obtain an indication of one or more objects in the image; output the indication of the one or more objects in the image and the indication of the one or more drivable areas in the image.
Yu teaches apply one or more self-attention based transformers to a second input that is based on the one or more representation vectors to obtain an indication of one or more objects in the image (p0062: Transformer layers 332 can include self-attention layers. p0070: FIGS. 5-6 illustrate example methods 500-600 of using and training machine-learning models for end-to-end identification and tracking of objects); output the indication of the one or more objects in the image and the indication of the one or more drivable areas in the image (p0002: dynamic information (such as information about other vehicles, pedestrians, street lights)).
Mei and Yu are combinable because they both deal with autonomous vehicle technology systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to combine the teachings of Mei with the teachings of Yu for the purpose of providing correct instructions to the vehicle controls and the drivetrain (p0002).
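For context, the claimed arrangement — a convolutional feature extractor whose representation vectors feed separate self-attention branches, one yielding an indication of objects and one an indication of drivable areas — can be illustrated with a minimal NumPy sketch. All function names, dimensions, and weights below are hypothetical and are not drawn from the Mei or Yu disclosures; this is an illustrative sketch of the general technique, not either reference's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv_features(image, kernel):
    # Toy "feature extractor": one valid 2-D convolution, with the
    # output flattened into a sequence of representation vectors.
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.array([[(image[i:i + kh, j:j + kw] * kernel).sum()
                     for j in range(w - kw + 1)]
                    for i in range(h - kh + 1)])
    return out.reshape(-1, 1)  # (tokens, feature_dim=1)

def self_attention(tokens, d_model):
    # One self-attention pass over the representation vectors.
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((tokens.shape[1], d_model)) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_model))
    return attn @ v

image = np.arange(25.0).reshape(5, 5)
reps = conv_features(image, np.ones((3, 3)) / 9.0)   # representation vectors
objects_branch  = self_attention(reps, d_model=4)    # second input -> objects
drivable_branch = self_attention(reps, d_model=4)    # third input -> drivable areas
print(objects_branch.shape, drivable_branch.shape)   # (9, 4) (9, 4)
```

The point of the sketch is only the data flow: a single convolutional backbone produces shared representation vectors that are then consumed by distinct attention-based heads.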
Claim 20 has been analyzed and rejected with regard to claim 13 and in accordance with Yu's further teaching on: A computer-readable memory that contains instructions, which when executed by a processor perform steps in a method (p0091).
Regarding claim 1, the structural elements of apparatus claim 13 perform all of the steps of method claim 1. Thus, claim 1 is rejected for the same reasons discussed in the rejection of claim 13.
Regarding claim 14, Mei in view of Yu teaches the apparatus of claim 13, wherein the one or more controllers are further configured to cause the apparatus to: apply the one or more self-attention based transformers to a fourth input based on the one or more representation vectors to obtain an indication of one or more lane lines in the image (Yu: p0002: dynamic information (such as information about other vehicles, pedestrians, street lights, etc.)).
The rationale applied to the rejection of claim 13 has been incorporated herein.
Regarding claim 15, Mei in view of Yu teaches the apparatus of claim 13, wherein the one or more controllers are further configured to cause the apparatus to: determine an attention matrix based on a key vector and a query vector associated with the one or more representation vectors (Yu: p0062: associated with i-th bounding box and form a query vector q.sub.i=W.sub.q.Math.FV.sub.i, a key vector ...);
determine a first feature vector based on the attention matrix and a first value vector associated with the one or more representation vectors (Yu: p0063),
wherein the indication of the one or more objects comprises the first feature vector (Yu: p0020: combined feature vector or feature tensor that is subsequently processed by a neural network (NN) deploying one or more attention blocks. Although, for conciseness, the reference throughout this disclosure is made to feature tensors, it should be understood that the term “feature tensor” encompasses feature vectors, feature matrices, and any applicable representation of digitized features representative of objects); and determine a second feature vector based on the attention matrix and a second value vector associated with the one or more representation vectors, wherein the indication of the one or more drivable areas comprises the second feature vector (Yu: p0062-63: detect objects).
The rationale applied to the rejection of claim 13 has been incorporated herein.
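The mechanics recited in claim 15 — a single attention matrix computed from query and key vectors, then applied to two different value projections to yield two distinct feature vectors — correspond to the standard scaled dot-product attention pattern. The sketch below is illustrative only; the dimensions and weight names are invented and do not come from Yu's disclosure.

```python
import numpy as np

rng = np.random.default_rng(42)
reps = rng.standard_normal((6, 8))        # one or more representation vectors
d = 8
Wq, Wk  = rng.standard_normal((8, d)), rng.standard_normal((8, d))
Wv1, Wv2 = rng.standard_normal((8, d)), rng.standard_normal((8, d))

# Attention matrix from the query and key vectors (softmax rows sum to 1).
scores = (reps @ Wq) @ (reps @ Wk).T / np.sqrt(d)
A = np.exp(scores - scores.max(axis=-1, keepdims=True))
A = A / A.sum(axis=-1, keepdims=True)

# The same attention matrix reused with two different value projections.
first_feature  = A @ (reps @ Wv1)   # e.g. indication of one or more objects
second_feature = A @ (reps @ Wv2)   # e.g. indication of drivable areas
print(first_feature.shape, second_feature.shape)  # (6, 8) (6, 8)
```

Reusing one attention matrix across multiple value projections is what lets a single set of query/key computations serve several output heads.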
Regarding claim 16, Mei in view of Yu teaches the apparatus of claim 15, wherein the one or more controllers are further configured to cause the apparatus to: apply the one or more self-attention based transformers to a fourth input that is based on the one or more representation vectors to obtain an indication of one or more lane lines in the image (Yu: p0025: accurate lane estimation can be performed automatically without a driver input or control; p0062: transformer layers 332 can include self-attention layers), wherein generating the indication of the one or more lane lines comprises determining a third feature vector based on the attention matrix and a third value vector associated with the one or more representation vectors, wherein the indication of the one or more lane lines comprises the third feature vector (p0062-63: detect objects).
The rationale applied to the rejection of claim 13 has been incorporated herein.
Regarding claim 17, Mei in view of Yu teaches the apparatus of claim 13, wherein the one or more self-attention based transformers comprise an encoder comprising one or more first transformer blocks configured to generate an encoded vector based on the one or more representation vectors and a decoder comprising one or more decoder blocks configured to generate a decoded vector based on the encoded vector (Yu: p0061: SPM 320 can include a number of subnetworks, such as an encoder subnetwork 330), and the indication of one or more drivable areas is based on the decoded vector (Yu: p0061: a decoder subnetwork 340).
The rationale applied to the rejection of claim 13 has been incorporated herein.
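The encoder-decoder structure recited in claim 17 — an encoder producing an encoded vector from the representation vectors, and a decoder producing a decoded vector from the encoder's output — can be sketched as self-attention in the encoder followed by cross-attention in the decoder. All seeds, dimensions, and the notion of learned decoder queries below are illustrative assumptions, not details taken from Mei or Yu.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def transformer_block(x, memory, d, seed):
    # Queries come from `x`; keys and values come from `memory`.
    # When x is memory this is self-attention (encoder); otherwise it is
    # cross-attention (decoder attending to the encoder's output).
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((x.shape[1], d))
    Wk = rng.standard_normal((memory.shape[1], d))
    Wv = rng.standard_normal((memory.shape[1], d))
    attn = softmax((x @ Wq) @ (memory @ Wk).T / np.sqrt(d))
    return attn @ (memory @ Wv)

reps    = np.random.default_rng(0).standard_normal((10, 16))  # representation vectors
encoded = transformer_block(reps, reps, d=16, seed=1)         # encoder: self-attention
queries = np.random.default_rng(2).standard_normal((4, 16))   # hypothetical decoder queries
decoded = transformer_block(queries, encoded, d=16, seed=3)   # decoder: cross-attention
print(encoded.shape, decoded.shape)  # (10, 16) (4, 16)
```

The decoded vectors would then feed whatever output head produces the drivable-area indication; that head is omitted here for brevity.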
Regarding claim 19, Mei in view of Yu teaches the apparatus of claim 13, wherein, to receive the image, the one or more controllers are further configured to cause the apparatus to: capture the image from a camera of a vehicle (Yu: p0071); and input the image to a computing platform of the vehicle comprising the feature extractor and the one or more self-attention based transformers (Yu: p0075).
The rationale applied to the rejection of claim 13 has been incorporated herein.
Regarding claim 2, the structural elements of apparatus claim 14 perform all of the steps of method claim 2. Thus, claim 2 is rejected for the same reasons discussed in the rejection of claim 14.
Regarding claim 3, the structural elements of apparatus claim 15 perform all of the steps of method claim 3. Thus, claim 3 is rejected for the same reasons discussed in the rejection of claim 15.
Regarding claim 4, the structural elements of apparatus claim 16 perform all of the steps of method claim 4. Thus, claim 4 is rejected for the same reasons discussed in the rejection of claim 16.
Regarding claim 5, the structural elements of apparatus claim 17 perform all of the steps of method claim 5. Thus, claim 5 is rejected for the same reasons discussed in the rejection of claim 17.
Regarding claim 9, the structural elements of apparatus claim 19 perform all of the steps of method claim 9. Thus, claim 9 is rejected for the same reasons discussed in the rejection of claim 19.
Regarding claim 7, Mei in view of Yu teaches the method of claim 5, wherein each decoder block of the one or more decoder blocks comprises a respective second transformer block of one or more second transformer blocks (Yu: p0061: Encoder subnetwork 330 can include a set of transformer layers 332 and a set of feed-forward layers 334.).
The rationale applied to the rejection of claim 1 has been incorporated herein.
Regarding claim 8, Mei in view of Yu teaches the method of claim 5, wherein each decoder block of the one or more decoder blocks comprises a respective second convolutional layer of one or more second convolutional layers (Yu: p0063:feed-forward layers 334 can be convolutional layers).
The rationale applied to the rejection of claim 1 has been incorporated herein.
Regarding claim 10, Mei teaches the method of claim 9, wherein the one or more objects in the image correspond to one or more second vehicles (col. 1, lines 5-10: other vehicles).
Regarding claim 11, Mei teaches the method of claim 1, further comprising: generating one or more second representation vectors using one or more second convolutional layers taking the one or more representation vectors as a fourth input, wherein the second input comprises the one or more second representation vectors (claim 1).
Regarding claim 12, Mei in view of Yu teaches the method of claim 1, wherein the one or more self-attention based transformers comprise an encoder comprising one or more transformers configured to receive the second input, a decoder comprising one or more second transformers, and one or more feed forward networks, each feed forward network configured to identify a respective object of the one or more objects (Yu:p0061-62).
The rationale applied to the rejection of claim 1 has been incorporated herein.
Allowable Subject Matter
Claims 6 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Mei et al. (US 9286524) teaches a similar system. However, the closest prior art of record, namely Mei et al. (US 9286524), does not disclose, teach, or suggest the claim limitation as recited in dependent claim 18.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HELEN Q ZONG whose telephone number is (571)270-1600. The examiner can normally be reached Mon-Fri 9-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Merouan, Abderrahim can be reached on (571) 270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HELEN ZONG/Primary Examiner, Art Unit 2683