DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gao et al. (US Patent Application Publication No. 2022/0164350), hereinafter referred to as Gao.
As per claim 1, Gao teaches a method for setting dynamic triggers for event recordings for a vehicle (see at least abstract, summary, Paras. 72-74, Fig. 3), the method comprising: obtaining sensor data samples captured by one or more sensors of the vehicle (see at least abstract, summary, Paras. 72-74, Fig. 3); generating sensor data embeddings from the sensor data samples, wherein the sensor data embeddings are generated by processing each sensor data sample through one or more sensor data embedding networks that are trained to process sensor data samples and to output a corresponding sensor data embedding for each sensor data sample in a multi-dimensional vector space (see at least abstract, summary, Paras. 72-74, Fig. 3); receiving a query embedding, wherein the query embedding has been generated by processing a query through one or more query embedding networks that are trained to process queries and to output a corresponding query embedding for each query in the multi-dimensional vector space, and wherein each of the one or more query embedding networks is trained in association with one or more of the sensor data embedding networks such that a query embedding of a query that is contextually related to a specific sensor data sample points towards the same point as the sensor data embedding of that sensor data sample within the multi-dimensional vector space (see at least abstract, summary, Paras. 72-74, Fig. 3); identifying one or more sensor data embeddings within the multi-dimensional vector space based on a proximity to the received query embedding within the multi-dimensional vector space (see at least abstract, summary, Paras. 65-68, Fig. 3); and storing sensor data samples represented by the identified one or more sensor data embeddings (see at least abstract, summary, Paras. 72-74, Fig. 3).
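For illustration only, the proximity-based identification recited in claim 1 can be sketched as a nearest-neighbor search in a shared vector space. The code below is the examiner's illustrative assumption of one conventional implementation (cosine similarity, hypothetical function names and embedding values); it is not drawn from Gao or from the instant application.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def find_matches(query_emb, sensor_embs, top_k=3):
    """Indices of the sensor-data embeddings nearest the query embedding."""
    ranked = sorted(range(len(sensor_embs)),
                    key=lambda i: cosine(query_emb, sensor_embs[i]),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical 3-dimensional embeddings for three buffered sensor-data samples.
sensor_embs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.9, 0.1, 0.0]]
query_emb = [1.0, 0.0, 0.0]
matches = find_matches(query_emb, sensor_embs, top_k=2)  # [0, 2]
```

In this sketch, the samples represented by the matched embeddings (indices 0 and 2) would then be stored, corresponding to the final storing step of the claim.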
As per claim 2, Gao teaches a method according to claim 1, wherein the one or more sensor data embedding networks comprise a plurality of sensor data embedding networks including one sensor data embedding network for a corresponding sensor of the vehicle, wherein the plurality of sensor data embedding networks comprises a first sensor data embedding network trained to process sensor data samples of a first sensor and to output a corresponding sensor data embedding, and wherein each of the other sensor data embedding networks is trained in association with the first sensor data embedding network such that a sensor data embedding generated by the first sensor data embedding network and a sensor data embedding generated by each of the other sensor data embedding networks point towards the same point within the multi-dimensional vector space when the generated sensor data embeddings are contextually, spatially and/or temporally related (see at least abstract, summary, Paras. 26, 35, 72-74, Fig. 1).
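For illustration only, the joint-training relationship recited in claim 2 (embeddings from different sensor networks pointing towards the same point when related) is commonly enforced with an alignment loss. The sketch below is a hypothetical illustration of that standard technique, not Gao's disclosed training procedure; all names and values are the examiner's assumptions.

```python
def alignment_loss(emb_a, emb_b):
    """Squared Euclidean distance between paired embeddings produced by
    two sensor-specific networks for the same driving moment; training
    that minimizes this loss drives contextually related embeddings
    towards the same point in the shared vector space."""
    return sum((x - y) ** 2 for x, y in zip(emb_a, emb_b))

# Hypothetical paired camera/lidar embeddings for one driving moment.
camera_emb = [0.2, 0.8, 0.1]
lidar_emb = [0.1, 0.9, 0.1]
loss = alignment_loss(camera_emb, lidar_emb)  # approx. 0.02
```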
As per claim 3, Gao teaches a method according to claim 1, wherein the sensor data embeddings are continuously generated and temporarily stored in a data buffer (see at least abstract, summary, Paras. 26, 35, 72-74, Fig. 1).
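For illustration only, the temporary buffering recited in claim 3 is conventionally realized as a fixed-size ring buffer that evicts the oldest entries as new embeddings arrive. The capacity and sample values below are hypothetical assumptions, not taken from Gao.

```python
from collections import deque

# Fixed-size ring buffer: embeddings are continuously generated and
# temporarily held; the oldest entries are evicted as new ones arrive.
BUFFER_SIZE = 4  # hypothetical capacity
buffer = deque(maxlen=BUFFER_SIZE)

for t in range(6):  # six successive sensor-data samples
    embedding = [float(t)]  # placeholder for a network's output vector
    buffer.append((t, embedding))

timestamps = [t for t, _ in buffer]  # only the newest four remain
```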
As per claim 4, Gao teaches a method according to claim 1, wherein the storing the sensor data samples comprises persistently storing the sensor data samples represented by the identified one or more sensor data embeddings in a data storage unit (see at least abstract, summary, Paras. 26, 35, 72-74, Fig. 1).
As per claim 5, Gao teaches a method according to claim 1, wherein the identifying of the one or more sensor data embeddings comprises identifying the one or more sensor data embeddings that are within a distance value from the obtained query embedding within the multi-dimensional vector space (see at least abstract, summary, Paras. 11, 26, 35, 72-74, Fig. 1).
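For illustration only, the distance-value criterion of claim 5 can be sketched as a radius search around the query embedding. The Euclidean metric, the threshold, and all embedding values below are hypothetical assumptions by the examiner, not drawn from Gao.

```python
import math

def within_distance(query_emb, sensor_embs, max_dist):
    """Indices of embeddings whose Euclidean distance to the query
    embedding is at most max_dist (the claimed distance value)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [i for i, e in enumerate(sensor_embs)
            if dist(query_emb, e) <= max_dist]

# Hypothetical 2-dimensional embeddings and threshold.
sensor_embs = [[0.0, 0.0], [0.3, 0.4], [3.0, 4.0]]
query_emb = [0.0, 0.0]
hits = within_distance(query_emb, sensor_embs, max_dist=1.0)  # [0, 1]
```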
As per claim 6, Gao teaches a method according to claim 1, further comprising: transmitting the stored sensor data samples to a remote server (see at least abstract, summary, Paras. 26, 35, 72-74, Fig. 1).
As per claim 7, Gao teaches a method according to claim 1, wherein the vehicle comprises an automated driving system, ADS, configured to generate ADS output data samples, the method further comprising: generating ADS data embeddings from the ADS output data samples, wherein the ADS data embeddings are generated by processing each ADS output data sample through one or more ADS data embedding networks that are trained to process ADS output data samples and to output a corresponding ADS data embedding in the multi-dimensional vector space, and wherein each of the one or more ADS data embedding networks is trained in association with one or more of the sensor data embedding networks such that an ADS data embedding of an ADS output data sample that is contextually, spatially and/or temporally related to a specific sensor data sample points towards the same point as the sensor data embedding of that sensor data sample within the multi-dimensional vector space (see at least abstract, summary, Paras. 26, 35, 49, 57, 72-74, Fig. 1); identifying one or more ADS data embeddings within the multi-dimensional vector space based on a proximity to the received query embedding within the multi-dimensional vector space (see at least abstract, summary, Paras. 26, 35, 49, 57, 72-74, Fig. 1); and storing ADS output data samples represented by the identified one or more ADS data embeddings (see at least abstract, summary, Paras. 26, 35, 49, 57, 72-74, Fig. 1).
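For illustration only, because claim 7 places the sensor-data embeddings and the ADS-data embeddings in the same multi-dimensional vector space, a single query can be matched against both streams at once. The sketch below is a hypothetical illustration of that shared-space search; the stream names, metric, and values are the examiner's assumptions, not drawn from Gao.

```python
import math

def nearest_across_streams(query_emb, streams):
    """Search sensor-data and ADS-output embeddings in the same shared
    space; returns (stream_name, index) of the closest embedding."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = None
    for name, embs in streams.items():
        for i, e in enumerate(embs):
            d = dist(query_emb, e)
            if best is None or d < best[0]:
                best = (d, name, i)
    return best[1], best[2]

streams = {
    "sensor": [[1.0, 0.0], [0.0, 1.0]],  # hypothetical embeddings
    "ads": [[0.9, 0.1]],
}
hit = nearest_across_streams([1.0, 0.0], streams)  # ("sensor", 0)
```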
As per claims 8-14, the limitations of claims 8-14 are similar to the limitations of claims 1-7; therefore, claims 8-14 are rejected based on the same rationale.
Conclusion
Please refer to form 892 for cited references.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MUSSA A SHAAWAT, whose telephone number is (313) 446-6592. The examiner can normally be reached Monday-Friday, 9 am-5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Piateski, can be reached at 571-270-7429. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MUSSA A SHAAWAT/Primary Examiner, Art Unit 3669