DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) was filed on 9/6/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 10, and 12-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chu et al. (US 12,371,059).
For claims 1, 10, and 14, Chu discloses an apparatus for autonomous vehicle operation comprising a processor configured to implement a method comprising:
receiving, by a computer, a set of data from a software that performs driving related operations for an autonomous vehicle, wherein the set of data includes a first set of data related to an autonomous vehicle that operated on a road and a second set of data related to one or more objects located in an environment where the autonomous vehicle operated (Fig. 1, 5, 6, col. 1, ln 5-26, col. 2, ln 43 – col. 3, ln 5, where logged data associated with the vehicle and objects in the environment are received),
wherein the set of data is associated with timestamps, and wherein the set of data is received as part of a test performed with the software; generating a plurality of frames using the timestamps associated with the set of data, wherein each frame includes at least one data from the set of data, and wherein each frame is associated with a unique timestamp (Col. 2, ln 43 – col. 3, ln 5, col. 3, ln 58 – col. 4, ln 4, col. 4, ln 26-64, where individual frames are generated associated with their corresponding time data and set of data, which are unique to each frame);
determining, for each frame, that the at least one data indicates information related to the autonomous vehicle and/or the one or more objects (Col. 2, ln 43-58, col. 3, ln 36-64);
assigning, for each frame, a label associated with the information indicated by the at least one data; and displaying, using a graphical user interface (GUI) and for the test performed with the software, at least one label associated with at least one information in a frame (At least in fig. 3, col. 3, ln 58-65, col. 4, ln 26-64, col. 8, ln 13-40, col. 10, ln 53 – col. 11, ln 14, col. 11, ln 28-41, col. 13, ln 12-21, col. 18, ln 1-11, col. 21, Example Clauses A-G, where labels with data and features related to the agents are assigned and displayed on the user display in each frame).
For claim 2, Chu discloses the method of claim 1, wherein the first set of data is associated with the autonomous vehicle that is operated on the road in a simulation, and wherein the second set of data is associated with the one or more objects that are simulated in the simulation (Col. 1, ln 5-26, col. 2, ln 31-35, 63-67, where the data may be associated with a simulated environment).
For claim 3, Chu discloses the method of claim 1, further comprising: determining that the software is validated upon determining that a driving related operation of the autonomous vehicle indicated by the at least one label in the frame is the same as or meets a pre-determined performance requirement (Col. 1, ln 5-26, col. 2, ln 36-58, col. 17, ln 3-35, where the data representing the scenarios, including the various information related to various objects, parameters, values, and driving decisions, are tested and validated by the software systems).
For claim 4, Chu discloses the method of claim 1, further comprising: determining that the information indicated by the at least one data in each frame is associated with a static label or a dynamic label, wherein the label for the information is assigned based on whether the information is associated with the static label or the dynamic label (Fig. 3, col. 3, ln 38-64, col. 4, ln 26-52, where each frame is associated with labels and feature data related to each agent, such as objects including vehicles, animals, road debris, traffic cones, potholes, or other road features that are static and dynamic).
For claim 12, Chu discloses the apparatus of claim 10, wherein the first set of data include driving related operations of the autonomous vehicle (Col. 1, ln 5-26, col. 2, ln 21-62, where the testing and simulation data are related to operations of the autonomous vehicle).
For claim 13, Chu discloses the apparatus of claim 10, wherein the second set of data related to at least one object includes a speed of the at least one object, a location of the at least one object, or a distance from the autonomous vehicle of the at least one object (Col. 3, ln 60-64, col. 4, ln 53-58, col. 7, ln 4-13, col. 25, ln 38-53).
For claim 15, Chu discloses the non-transitory computer readable program storage medium of claim 14, wherein the method further comprises: storing in a database a plurality of labels and a plurality of rules, wherein each label is stored with a corresponding rule that indicates a content of a data or a pattern of the data associated with that label (Fig. 8, col. 10, ln 9-52, where a plurality of rules corresponding to the labels, along with the data content or data pattern associated with each label, are stored in the memory and system).
For claim 16, Chu discloses the non-transitory computer readable program storage medium of claim 14, wherein the method further comprises: determining, from the plurality of frames, a set of frames that comprise a topic indicative of a driving related operation of the autonomous vehicle (Fig. 5-8, col. 2, ln 28 – col. 3, ln 5, col. 3, ln 45-64).
For claim 17, Chu discloses the non-transitory computer readable program storage medium of claim 14, wherein the method further comprises: determining, from the plurality of frames, a set of frames that comprise a topic indicative of a driving related operation of a vehicle located in the environment where the autonomous vehicle operated (Fig. 5-8, col. 2, ln 28 – col. 3, ln 5, col. 3, ln 45-64).
For claim 18, Chu discloses the non-transitory computer readable program storage medium of claim 14, wherein the method further comprises: determining, from the plurality of frames, a set of frames that comprise a topic indicative of a characteristic of an object located in the environment where the autonomous vehicle operated, wherein the one or more objects comprise the object (Fig. 5-8, col. 2, ln 28 – col. 3, ln 5, col. 3, ln 45-64, where the various scenarios are indicative of features of various agents located in the environment where the autonomous vehicle operated).
For claim 19, Chu discloses the non-transitory computer readable program storage medium of claim 14, wherein the method further comprises: determining, from the plurality of frames, a set of frames that comprise a topic indicative of a characteristic of the road on which the autonomous vehicle operated (Fig. 5-8, col. 2, ln 28 – col. 3, ln 5, col. 3, ln 34-64, col. 4, ln 5-25, where the various scenarios are indicative of features of various agents located in the environment, including road feature data, where the autonomous vehicle operated).
For claim 20, Chu discloses the non-transitory computer readable program storage medium of claim 14, wherein the method further comprises: determining that a set of information related to an object from the one or more objects is related to another set of information related to the object in another frame (Fig. 5-8, col. 2, ln 28 – col. 3, ln 5, col. 3, ln 34-64, col. 4, ln 53-64, col. 9, ln 12-29, where sets of data related to various objects are analyzed and determined to be related to other data of the objects in multiple frames, maintaining and determining the various scenarios of the vehicle being operated).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Chu et al. (US 12,371,059) as applied to claim 4 above, and further in view of Patel et al. (US 2023/0174103 A1).
For claim 5, Chu discloses the method of claim 4, wherein in response to the information being associated with the static label, the label for the information is assigned (Col. 4, ln 32-52), but does not explicitly disclose assigning the label by determining that the information in the frame is described by or related to a first rule from a database that stores the label and the first rule associated with the label. Patel, in the same field of art, discloses assigning the label by determining that the information in the frame is described by or related to a first rule from a database that stores the label and the first rule associated with the label (Para. 0056-0059, where labels are assigned to a set of classifiers, including static and dynamic objects, based on corresponding rules for classifying the objects). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chu and Patel to assign the label by determining that the information in the frame is described by or related to a first rule from a database that stores the label and the first rule associated with the label, in order to label corresponding objects appropriately based on rules.
For claim 6, Chu, as modified, discloses the method of claim 5, wherein the label for the information associated with the static label is assigned without performing a simulation with a scenario (Patel - Para. 0056-0059, where the labeling of the objects is based on stored classifying rules and not on simulations).
For claim 7, Chu discloses the method of claim 4, wherein in response to the information being associated with the dynamic label, the label for the information is assigned (Col. 4, ln 32-52), but does not explicitly disclose assigning the label by determining that the information in at least two frames comprising the frame is described by or related to a second rule from a database that stores the label and the second rule associated with the label. Patel discloses assigning the label based on such rules (Para. 0056-0059, where labels are assigned to a set of classifiers, including static and dynamic objects, based on corresponding rules for classifying the objects). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chu and Patel to assign the label by determining that the information in at least two frames comprising the frame is described by or related to a second rule from a database that stores the label and the second rule associated with the label, in order to label corresponding objects appropriately based on rules.
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Chu et al. (US 12,371,059) as applied to claim 4 above, and further in view of Balter (US 2022/0055660 A1).
For claim 8, Chu discloses the method of claim 4, but does not specifically disclose that the label for the information is assigned using computation resources that are assigned based on whether the information indicated by the at least one data in each frame is associated with the static label or the dynamic label. Balter, in the same field of art, discloses that the label for the information is assigned using computation resources that are assigned based on whether the information indicated by the at least one data in each frame is associated with the static label or the dynamic label (Para. 0046, 0060, 0111). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Chu such that the label for the information is assigned using computation resources that are assigned based on whether the information indicated by the at least one data in each frame is associated with the static label or the dynamic label, as taught by Balter, in order to improve and prioritize the use of computation resources.
For claim 9, Chu, as modified, discloses the method of claim 8, wherein a number of the computation resources assigned for the information associated with the static label is less than that assigned for the information associated with the dynamic label (Balter - Para. 0046, 0060, 0111).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Chu et al. (US 12,371,059) as applied to claim 10 above, and further in view of Raichelgauz et al. (US 2020/0019793 A1).
For claim 11, Chu discloses the apparatus of claim 10, wherein the at least one data in each frame is determined to include the information related to the autonomous vehicle and/or the one or more objects (Col. 2, ln 28-58), but does not specifically disclose performing a keyword search on the at least one data in each frame. Raichelgauz, in the same field of art, discloses performing a keyword search on the at least one data in each frame (Fig. 1, para. 0056). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Chu to perform a keyword search on the at least one data in each frame, as taught by Raichelgauz, in order to improve convenience for the user in searching for and identifying objects within frames.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
(US 2019/0325224 A1) Lee discloses a system for allowing keyword searching of objects within frames.
(US 2021/0073525 A1) Weinzaepfel et al. discloses a method for searching objects via keywords in image frames.
(US 2022/0335714 A1) Ferzli et al. discloses an autonomous agent prioritizing tasks and allocating computing resources.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sze-Hon Kong whose telephone number is (571)270-1503. The examiner can normally be reached 9 AM-5 PM Mon-Fri.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin can be reached at (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SZE-HON KONG/Primary Examiner, Art Unit 3657