DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant's arguments filed 12/24/2025 have been fully considered but they are not persuasive. The obviousness-type double patenting rejection is maintained.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-2, 11-12 and 16 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 and 1-14 of co-pending Application No. 18/365,966 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-2, 11-12 and 16 of the instant application are similar in scope and content to claims 1-3 and 1-14 of the reference application from the same Applicant.
It is clear that all the elements of application claims 1-2, 11-12 and 16 are to be found in co-pending claims 1-3 and 1-14 (as application claims 1-2, 11-12 and 16 fully encompass co-pending claims 1-3 and 1-14). The difference between the application claims and the co-pending claims lies in the fact that the co-pending claims include many more elements and are thus much more specific. It has been held that the generic invention is “anticipated” by the “species.” See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1-2, 11-12 and 16 are anticipated by claims 1-3 and 1-14 of the co-pending application, they are not patentably distinct from the co-pending claims.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Application No: 18/417,105
Application No: 18/365,966
1. (Currently Amended) A method, comprising: extracting a set of observations from sensor data corresponding to a region of a physical environment; identifying local map data corresponding to the region; generating one or more input tokenized descriptions of the set of observations and the local map data; generating, based at least on a trained language model processing the one or more input tokenized descriptions, [[a]] one or more output tokenized descriptions indicating one or more differences between the local map data and the set of observations; and determining, based at least on the tokenized description, whether one or more updates are to be performed with respect to the local map data in view of [[based]] on the one or more differences.
1. A computer-implemented method, comprising: generating, based at least on a language model processing data associated with at least a portion of a static or dynamic environment, a tokenized description of at least a portion of the environment, the tokenized description determined based on at least one of semantic, topological, geometric, kinematic and relational information of features data; and performing one or more operations based at least on the tokenized description.
2. The method of claim 1, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
2. The computer-implemented method of claim 1, wherein the data includes a set of observations including at least one of semantic information, location information, contextual information, geometric information, motion information, state information, or previously generated map data corresponding to at least a portion of the environment.
3. The computer-implemented method of claim 1, wherein the data corresponds to sensor data, a set of feature embeddings determined from the sensor data, an existing map, an internal state recording of a vehicle or a robot, an activity log of human control or interaction.
3. The method of claim 1, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
4. The method of claim 1, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
5. The method of claim 1, wherein the set of observations are determined using an ego machine operating in, or proximate to, the region of the physical environment, and wherein the trained language model is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
6. The method of claim 1, wherein potential differences are analyzed for at least two levels of granularity, starting at a higher level of granularity.
7. The method of claim 1, wherein the tokenized description further includes one or more recommended changes to the map data.
8. The method of claim 1, further comprising: receiving information for one or more updates to the local map data; and storing the updated map data for use in at least one of future operation or future difference determinations.
9. The method of claim 1, wherein the tokenized description is written in a road topology language (RTL) or other domain specific language (DSL).
10. The method of claim 1, wherein the tokenized description is determined based at least on at least one of semantic, topological, geometric, kinematic, or relational information of features in the set of observations.
11. A processor, including one or more logical units to: generate a set of observations corresponding to a region of a physical environment; identify local map data corresponding to the region; and generate, based at least on a large language model (LLM) processing data corresponding to the local map data and at least a subset of the set of observations, a tokenized description indicating one or more differences identified between the local map data and the set of observations, wherein the tokenized description is used to determine whether to perform one or more updates to the local map data.
12. A processor, comprising: one or more logical units to use a language model to generate a tokenized text string providing a semantic representation of an environment based, at least in part, on a set of observations of the environment.
12. The processor of claim 11, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
13. The processor of claim 12, wherein the set of observations includes at least one of semantic information, location information, contextual information, geometric information, or previously generated map data for the environment.
14. The processor of claim 12, wherein the tokenized text string includes: a sequence of tokens corresponding to objects in the environment; and spatial and semantic information of the objects.
13. The processor of claim 11, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
14. The processor of claim 11, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
15. The processor of claim 11, wherein the set of observations are determined on an ego machine operating in, or proximate to, the region of the physical environment, and wherein the LLM is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
16. The processor of claim 11, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
11. The simulation system of claim 7, wherein the simulation system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a large language model (LLM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
17. A system comprising: one or more processors to determine one or more updates to map data based at least on one or more differences between the map data and a set of observations for the region, the one or more differences being identified based at least on a language model processing the map data and data corresponding to the set of observations.
18. The system of claim 17, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
19. The system of claim 17, wherein the map data, the set of observations, and the one or more differences are represented in a domain specific language (DSL) corresponding to a mapping domain.
20. The system of claim 17, wherein the system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
Claims 1, 9, 11, 17 and 20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 8, 10, 15 and 20 of co-pending Application No. 18/409,018 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 9, 11, 17 and 20 of the instant application are similar in scope and content to claims 1, 8, 10, 15 and 20 of the reference application from the same Applicant.
It is clear that all the elements of application claims 1, 9, 11, 17 and 20 are to be found in co-pending claims 1, 8, 10, 15 and 20 (as application claims 1, 9, 11, 17 and 20 fully encompass co-pending claims 1, 8, 10, 15 and 20). The difference between the application claims and the co-pending claims lies in the fact that the co-pending claims include many more elements and are thus much more specific. It has been held that the generic invention is “anticipated” by the “species.” See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1, 9, 11, 17 and 20 are anticipated by claims 1, 8, 10, 15 and 20 of the co-pending application, they are not patentably distinct from the co-pending claims.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Application No: 18/417,105
Application No: 18/409,018
1. (Currently Amended) A method, comprising: extracting a set of observations from sensor data corresponding to a region of a physical environment; identifying local map data corresponding to the region; generating one or more input tokenized descriptions of the set of observations and the local map data; generating, based at least on a trained language model processing the one or more input tokenized descriptions, [[a]] one or more output tokenized descriptions indicating one or more differences between the local map data and the set of observations; and determining, based at least on the tokenized description, whether one or more updates are to be performed with respect to the local map data in view of [[based]] on the one or more differences.
1. A method, comprising: generating, based at least on a language model processing data associated with a set of observations corresponding to at least a portion of an environment, a feature vector corresponding to a tokenized description of at least the portion of the environment; performing, using the feature vector, a similarity search of a set of one or more previously-determined feature vectors to determine one or more similar feature vectors; updating the tokenized description of at least the portion of the environment based in part upon one or more additional observations obtained for at least the portion of the environment until a single similar feature vector is identified through the similarity search; and identifying a geographic location, associated with the single similar feature vector, as a current location in the environment.
2. The method of claim 1, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
3. The method of claim 1, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
4. The method of claim 1, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
5. The method of claim 1, wherein the set of observations are determined using an ego machine operating in, or proximate to, the region of the physical environment, and wherein the trained language model is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
6. The method of claim 1, wherein potential differences are analyzed for at least two levels of granularity, starting at a higher level of granularity.
7. The method of claim 1, wherein the tokenized description further includes one or more recommended changes to the map data.
8. The method of claim 1, further comprising: receiving information for one or more updates to the local map data; and storing the updated map data for use in at least one of future operation or future difference determinations.
9. The method of claim 1, wherein the tokenized description is written in a road topology language (RTL) or other domain specific language (DSL).
8. The method of claim 1, wherein the tokenized description is written in a road topology language (RTL) or other domain specific language (DSL).
10. The method of claim 1, wherein the tokenized description is determined based at least on at least one of semantic, topological, geometric, kinematic, or relational information of features in the set of observations.
11. A processor, including one or more logical units to: generate a set of observations corresponding to a region of a physical environment; identify local map data corresponding to the region; and generate, based at least on a large language model (LLM) processing data corresponding to the local map data and at least a subset of the set of observations, a tokenized description indicating one or more differences identified between the local map data and the set of observations, wherein the tokenized description is used to determine whether to perform one or more updates to the local map data.
10. A processor, comprising: one or more circuits to: generate, based at least on a language model processing data associated with a set of observations corresponding to a current location, a tokenized description of one or more features corresponding to the current location; perform, using the tokenized description, a similarity search of a set of one or more previously-determined tokenized descriptions to determine one or more similar tokenized descriptions; update the tokenized description of the one or more features, corresponding to the current environment, based in part upon one or more additional observations obtained for the current location until a similar previously-determined tokenized description is identified through the similarity search; and identify a geographic location, associated with the similar previously-determined tokenized description, as the current location.
12. The processor of claim 11, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
13. The processor of claim 11, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
14. The processor of claim 11, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
15. The processor of claim 11, wherein the set of observations are determined on an ego machine operating in, or proximate to, the region of the physical environment, and wherein the LLM is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
16. The processor of claim 11, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
15. The processor of claim 14, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
17. A system comprising: one or more processors to determine one or more updates to map data based at least on one or more differences between the map data and a set of observations for the region, the one or more differences being identified based at least on a language model processing the map data and data corresponding to the set of observations.
16. A system comprising: one or more processors to determine a location of a vehicle based in part on a tokenized description of a set of observations obtained for the location, the tokenized description to be used in a similarity search of a set of previously-generated tokenized descriptions to identify a geographic location associated with a most similar result of the similarity search.
18. The system of claim 17, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
19. The system of claim 17, wherein the map data, the set of observations, and the one or more differences are represented in a domain specific language (DSL) corresponding to a mapping domain.
20. The system of claim 17, wherein the system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
20. The system of claim 16, wherein the simulation system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
Claims 1, 2, 9, 11-12, 16-17, and 20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 6, 8-9, 11-12, 15-16, and 20 of co-pending Application No. 18/472,941 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 2, 9, 11-12, 16-17, and 20 of the instant application are similar in scope and content to claims 1, 6, 8-9, 11-12, 15-16, and 20 of the reference application from the same Applicant.
It is clear that all the elements of application claims 1, 2, 9, 11-12, 16-17, and 20 are to be found in co-pending claims 1, 6, 8-9, 11-12, 15-16, and 20 (as application claims 1, 2, 9, 11-12, 16-17, and 20 fully encompass co-pending claims 1, 6, 8-9, 11-12, 15-16, and 20). The difference between the application claims and the co-pending claims lies in the fact that the co-pending claims include many more elements and are thus much more specific. It has been held that the generic invention is “anticipated” by the “species.” See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1, 2, 9, 11-12, 16-17, and 20 are anticipated by claims 1, 6, 8-9, 11-12, 15-16, and 20 of the co-pending application, they are not patentably distinct from the co-pending claims.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Application No: 18/417,105
Application No: 18/472,941
1. (Currently Amended) A method, comprising: extracting a set of observations from sensor data corresponding to a region of a physical environment; identifying local map data corresponding to the region; generating one or more input tokenized descriptions of the set of observations and the local map data; generating, based at least on a trained language model processing the one or more input tokenized descriptions, [[a]] one or more output tokenized descriptions indicating one or more differences between the local map data and the set of observations; and determining, based at least on the tokenized description, whether one or more updates are to be performed with respect to the local map data in view of [[based]] on the one or more differences.
1. A method, comprising: generating, based at least on a set of observations captured using one or more sensors, a tokenized representation of the set of observations for at least a portion of an environment; generating, based at least on a language model processing the tokenized representation of the set of observations, a tokenized description of at least the portion of the environment, the tokenized description determined based in part on at least one of semantic, topological, geometric, kinematic, or relational information of features in the tokenized representation of the set of observations; and generating a map for at least the portion of the environment using the tokenized description.
2. The method of claim 1, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
8. The method of claim 1, wherein at least a subset of observations is captured using one or more sensors on a machine positioned in, or moving through, the portion of the environment.
9. The method of claim 1, wherein the sensors include at least one of camera sensors, radar sensors, LiDAR sensors, ultrasonic sensors, or depth sensors.
3. The method of claim 1, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
4. The method of claim 1, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
5. The method of claim 1, wherein the set of observations are determined using an ego machine operating in, or proximate to, the region of the physical environment, and wherein the trained language model is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
6. The method of claim 1, wherein potential differences are analyzed for at least two levels of granularity, starting at a higher level of granularity.
7. The method of claim 1, wherein the tokenized description further includes one or more recommended changes to the map data.
8. The method of claim 1, further comprising: receiving information for one or more updates to the local map data; and storing the updated map data for use in at least one of future operation or future difference determinations.
9. The method of claim 1, wherein the tokenized description is written in a road topology language (RTL) or other domain specific language (DSL).
6. The method of claim 5, wherein the tokenized text string is written in a road topology language (RTL) or a domain specific language (DSL).
10. The method of claim 1, wherein the tokenized description is determined based at least on at least one of semantic, topological, geometric, kinematic, or relational information of features in the set of observations.
11. A processor, including one or more logical units to: generate a set of observations corresponding to a region of a physical environment; identify local map data corresponding to the region; and generate, based at least on a large language model (LLM) processing data corresponding to the local map data and at least a subset of the set of observations, a tokenized description indicating one or more differences identified between the local map data and the set of observations, wherein the tokenized description is used to determine whether to perform one or more updates to the local map data.
11. A processor, comprising: one or more circuits to: generate, based at least on a large language model (LLM) processing sensor data obtained using one or more sensors, a tokenized description of at least the portion of an environment, the tokenized description determined based in part on at least one of semantic, topological, geometric, kinematic, or relational information of features represented in the sensor data; and generate, based at least on the tokenized description, a map for at least the portion of the environment.
12. The processor of claim 11, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
12. The processor of claim 11, wherein the tokenized representation of at least the portion of the environment is generated using a tokenized representation of the sensor data.
13. The processor of claim 11, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
14. The processor of claim 11, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
15. The processor of claim 11, wherein the set of observations are determined on an ego machine operating in, or proximate to, the region of the physical environment, and wherein the LLM is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
16. The processor of claim 11, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
15. The processor of claim 11, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
17. A system comprising: one or more processors to determine one or more updates to map data based at least on one or more differences between the map data and a set of observations for the region, the one or more differences being identified based at least on a language model processing the map data and data corresponding to the set of observations.
16. A system, comprising: one or more processors to generate a map of an environment based at least on a tokenized description of at least a portion of the environment, the tokenized description generated based at least on a language model processing a set of observations of the environment determined using one or more sensors.
18. The system of claim 17, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
19. The system of claim 17, wherein the map data, the set of observations, and the one or more differences are represented in a domain specific language (DSL) corresponding to a mapping domain.
20. The system of claim 17, wherein the system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
20. The system of claim 16, wherein the simulation system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
Claims 1, 11, 16-17, and 20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 10, 15-16, and 20 of co-pending Application No. 18/474,591 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 11, 16-17, and 20 of the instant application are similar in scope and content to co-pending claims 1, 10, 15-16, and 20 of the reference application, which is from the same Applicant.
It is clear that all the elements of application claims 1, 11, 16-17, and 20 are to be found in co-pending claims 1, 10, 15-16, and 20 (as application claims 1, 11, 16-17, and 20 fully encompass co-pending claims 1, 10, 15-16, and 20). The difference between the application claims and the co-pending claims lies in the fact that the co-pending claims include many more elements and are thus much more specific. It has been held that the generic invention is “anticipated” by the “species.” See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1, 11, 16-17, and 20 are anticipated by claims 1, 10, 15-16, and 20 of the co-pending application, they are not patentably distinct from the co-pending claims.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Application No: 18/417,105
Application No: 18/474,591
1. (Currently Amended) A method, comprising: extracting a set of observations from sensor data corresponding to a region of a physical environment; identifying local map data corresponding to the region; generating one or more input tokenized descriptions of the set of observations and the local map data; generating, based at least on a trained language model processing the one or more input tokenized descriptions, [[a]] one or more output tokenized descriptions indicating one or more differences between the local map data and the set of observations; and determining, based at least on the tokenized description, whether one or more updates are to be performed with respect to the local map data in view of [[based]] on the one or more differences.
1. A method, comprising: generating, based at least on a first language model processing data associated with at least a section of a preliminary map, a tokenized description of the section of the preliminary map; identifying, based in part on the tokenized description, one or more potential issues with respect to the section of the preliminary map; and generating, based at least on a second language model processing data for the one or more potential issues, one or more textual representations or textual recommendations regarding the one or more potential issues.
2. The method of claim 1, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
3. The method of claim 1, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
4. The method of claim 1, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
5. The method of claim 1, wherein the set of observations are determined using an ego machine operating in, or proximate to, the region of the physical environment, and wherein the trained language model is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
6. The method of claim 1, wherein potential differences are analyzed for at least two levels of granularity, starting at a higher level of granularity.
7. The method of claim 1, wherein the tokenized description further includes one or more recommended changes to the map data.
8. The method of claim 1, further comprising: receiving information for one or more updates to the local map data; and storing the updated map data for use in at least one of future operation or future difference determinations.
9. The method of claim 1, wherein the tokenized description is written in a road topology language (RTL) or other domain specific language (DSL).
10. The method of claim 1, wherein the tokenized description is determined based at least on at least one of semantic, topological, geometric, kinematic, or relational information of features in the set of observations.
11. A processor, including one or more logical units to: generate a set of observations corresponding to a region of a physical environment; identify local map data corresponding to the region; and generate, based at least on a large language model (LLM) processing data corresponding to the local map data and at least a subset of the set of observations, a tokenized description indicating one or more differences identified between the local map data and the set of observations, wherein the tokenized description is used to determine whether to perform one or more updates to the local map data.
10. A processor, comprising: one or more circuits to: generate, using a first language model, a tokenized description of at least a section of preliminary map data; use a second language model to determine probability values for individual tokens of the tokenized description; and identify, based at least on the probability values, one or more potential modifications to be made with respect to the section of the preliminary map data.
12. The processor of claim 11, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
13. The processor of claim 11, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
14. The processor of claim 11, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
15. The processor of claim 11, wherein the set of observations are determined on an ego machine operating in, or proximate to, the region of the physical environment, and wherein the LLM is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
16. The processor of claim 11, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
15. The processor of claim 10, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
17. A system comprising: one or more processors to determine one or more updates to map data based at least on one or more differences between the map data and a set of observations for the region, the one or more differences being identified based at least on a language model processing the map data and data corresponding to the set of observations.
16. A system comprising: one or more processors to use a language model to identify one or more modifications to be performed with respect to at least a section of a map based in part on a tokenized description of at least the section of the map.
18. The system of claim 17, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
19. The system of claim 17, wherein the map data, the set of observations, and the one or more differences are represented in a domain specific language (DSL) corresponding to a mapping domain.
20. The system of claim 17, wherein the system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
20. The system of claim 16, wherein the simulation system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
Claims 1, 3-5, 9, 11-12, 16-17, and 20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5, 7, 9-12, 15-17, and 20 of co-pending Application No. 18/483,409 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 3-5, 9, 11-12, 16-17, and 20 of the instant application are similar in scope and content to co-pending claims 1-5, 7, 9-12, 15-17, and 20 of the reference application, which is from the same Applicant.
It is clear that all the elements of application claims 1, 3-5, 9, 11-12, 16-17, and 20 are to be found in co-pending claims 1-5, 7, 9-12, 15-17, and 20 (as application claims 1, 3-5, 9, 11-12, 16-17, and 20 fully encompass co-pending claims 1-5, 7, 9-12, 15-17, and 20). The difference between the application claims and the co-pending claims lies in the fact that the co-pending claims include many more elements and are thus much more specific. It has been held that the generic invention is “anticipated” by the “species.” See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1, 3-5, 9, 11-12, 16-17, and 20 are anticipated by claims 1-5, 7, 9-12, 15-17, and 20 of the co-pending application, they are not patentably distinct from the co-pending claims.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Application No: 18/417,105
Application No: 18/483,089
1. (Currently Amended) A method, comprising: extracting a set of observations from sensor data corresponding to a region of a physical environment; identifying local map data corresponding to the region; generating one or more input tokenized descriptions of the set of observations and the local map data; generating, based at least on a trained language model processing the one or more input tokenized descriptions, [[a]] one or more output tokenized descriptions indicating one or more differences between the local map data and the set of observations; and determining, based at least on the tokenized description, whether one or more updates are to be performed with respect to the local map data in view of [[based]] on the one or more differences.
1. A method, comprising: training, using a set of rules specific to a domain, a language model to generate a tokenized description of at least a portion of an environment associated with the domain; updating one or more parameters of the language model based in part on a plurality of human-authored review entries associated with the domain, the human-authored review entries relating to verification or modification of a respective tokenized description, along with a plaintext description of reasoning for the verification or modification; and providing the language model, after updating the one or more parameters, for use in evaluating one or more additional tokenized descriptions associated with the domain.
2. The method of claim 1, further comprising: providing an additional tokenized description, associated with the domain, as input to the language model; and receiving, as output of the language model, indication of a modification to be made to the tokenized description, along with a plaintext description of reasoning behind the modification.
2. The method of claim 1, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
3. The method of claim 1, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
2. The method of claim 1, further comprising: providing an additional tokenized description, associated with the domain, as input to the language model; and receiving, as output of the language model, indication of a modification to be made to the tokenized description, along with a plaintext description of reasoning behind the modification.
3. The method of claim 2, wherein the indication of the modification is provided in a tokenized text string.
4. The method of claim 1, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
5. The method of claim 1, further comprising: providing, as input to the language model, a proposed modification to an additional tokenized description associated with the domain; and receiving, as output of the language model, verification or rejection of the proposed modification, along with a plaintext description of the reasoning behind the verification or the rejection.
5. The method of claim 1, wherein the set of observations are determined using an ego machine operating in, or proximate to, the region of the physical environment, and wherein the trained language model is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
7. The method of claim 1, wherein the tokenized description corresponds to an object graph for the environment containing a sequence of textual tokens containing semantic, topological, geometric, kinematic, or relational information for one or more objects in the environment.
6. The method of claim 1, wherein potential differences are analyzed for at least two levels of granularity, starting at a higher level of granularity.
7. The method of claim 1, wherein the tokenized description further includes one or more recommended changes to the map data.
9. The method of claim 1, wherein the modification relates to at least one of an addition, deletion, or modification of a map annotation
8. The method of claim 1, further comprising: receiving information for one or more updates to the local map data; and storing the updated map data for use in at least one of future operation or future difference determinations.
9. The method of claim 1, wherein the tokenized description is written in a road topology language (RTL) or other domain specific language (DSL).
4. The method of claim 3, wherein the tokenized text string is in a road topology language (RTL) or a domain specific language (DSL).
10. The method of claim 1, wherein the tokenized description is determined based at least on at least one of semantic, topological, geometric, kinematic, or relational information of features in the set of observations.
11. A processor, including one or more logical units to: generate a set of observations corresponding to a region of a physical environment; identify local map data corresponding to the region; and generate, based at least on a large language model (LLM) processing data corresponding to the local map data and at least a subset of the set of observations, a tokenized description indicating one or more differences identified between the local map data and the set of observations, wherein the tokenized description is used to determine whether to perform one or more updates to the local map data.
10. A processor, comprising: one or more circuits to: provide representation data as input to a trained language model; and generate, using the trained language model, a verification or a modification proposal with respect to the representation data, along with a plaintext description of reasoning behind the verification or modification proposal.
12. The processor of claim 11, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
11. The processor of claim 10, wherein the language model is trained using rules for a map domain and a set of human-generated map review entries associated with the map domain, the human generated map review entries including human reasoning information in text format.
12. The processor of claim 10, wherein the representation data includes one or more initial modification proposals generated for at least a portion of a representation.
13. The processor of claim 11, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
14. The processor of claim 11, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
15. The processor of claim 11, wherein the set of observations are determined on an ego machine operating in, or proximate to, the region of the physical environment, and wherein the LLM is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
16. The processor of claim 11, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
15. The processor of claim 10, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
17. A system comprising: one or more processors to determine one or more updates to map data based at least on one or more differences between the map data and a set of observations for the region, the one or more differences being identified based at least on a language model processing the map data and data corresponding to the set of observations.
16. A system comprising: one or more processors to use a language model to provide one or more quality decisions with respect to generated map data, the one or more quality decisions including a plaintext description of reasoning behind the one or more decisions.
17. The system of claim 16, wherein the one or more processors are further to analyze the generated map data using the language model, wherein the one or more quality decisions relate to at least one of a validation or proposed modification of the generated map data.
18. The system of claim 17, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
19. The system of claim 17, wherein the map data, the set of observations, and the one or more differences are represented in a domain specific language (DSL) corresponding to a mapping domain.
20. The system of claim 17, wherein the system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
20. The system of claim 16, wherein the simulation system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
Claims 1, 6, 9-12, 14, and 16-18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-4, 7-13, 15, and 17-18 of co-pending Application No. 18/502,747 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 6, 9-12, 14, and 16-18 of the instant application are similar in scope and content to claims 1, 3-4, 7-13, 15, and 17-18 of the co-pending reference application from the same Applicant.
It is clear that all the elements of application claims 1, 6, 9-12, 14, and 16-18 are to be found in co-pending claims 1, 3-4, 7-13, 15, and 17-18 (as application claims 1, 6, 9-12, 14, and 16-18 fully encompass co-pending claims 1, 3-4, 7-13, 15, and 17-18). The difference between the application claims and the co-pending claims lies in the fact that the co-pending claims include many more elements and are thus much more specific. It has been held that the generic invention is “anticipated” by the “species.” See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1, 6, 9-12, 14, and 16-18 are anticipated by claims 1, 3-4, 7-13, 15, and 17-18 of the co-pending application, they are not patentably distinct from the co-pending claims.
Application No: 18/417,105
Application No: 18/502,747
1. (Currently Amended) A method, comprising: extracting a set of observations from sensor data corresponding to a region of a physical environment; identifying local map data corresponding to the region; generating one or more input tokenized descriptions of the set of observations and the local map data; generating, based at least on a trained language model processing the one or more input tokenized descriptions, [[a]] one or more output tokenized descriptions indicating one or more differences between the local map data and the set of observations; and determining, based at least on the tokenized description, whether one or more updates are to be performed with respect to the local map data in view of [[based]] on the one or more differences.
1. A method, comprising: generating, based at least on a language model processing data associated with a set of observations corresponding to at least a portion of an environment, a tokenized description of at least the portion of the environment; generating, based in part on the tokenized description, a feature vector representative of at least the portion of the environment; performing, using the generated feature vector, a similarity search of a set of one or more previously-determined feature vectors to determine one or more similar feature vectors; and associating one or more labels, applied to the one or more similar feature vectors, with at least the portion of the environment.
2. The method of claim 1, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
3. The method of claim 2, wherein the points in the latent space are members of a cluster determined based in part upon the proximity of the points in the latent space, and wherein the one or more labels are associated with the cluster.
4. The method of claim 3, further comprising: using a clustering algorithm and at least one clustering criterion to perform unsupervised clustering of a set of points in the latent space.
3. The method of claim 1, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
4. The method of claim 1, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
5. The method of claim 1, wherein the set of observations are determined using an ego machine operating in, or proximate to, the region of the physical environment, and wherein the trained language model is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
6. The method of claim 1, wherein potential differences are analyzed for at least two levels of granularity, starting at a higher level of granularity.
7. The method of claim 1, wherein the tokenized description includes a tokenized sequence representative of at least the portion of the environment, in which tokens are associated with objects or features, and wherein the feature vector is generated based in part on the tokenized sequence.
7. The method of claim 1, wherein the tokenized description further includes one or more recommended changes to the map data.
8. The method of claim 1, further comprising: receiving information for one or more updates to the local map data; and storing the updated map data for use in at least one of future operation or future difference determinations.
9. The method of claim 1, wherein the tokenized description is written in a road topology language (RTL) or other domain specific language (DSL).
8. The method of claim 1, wherein the tokenized description is written in a road topology language (RTL) or other domain specific language (DSL).
10. The method of claim 1, wherein the tokenized description is determined based at least on at least one of semantic, topological, geometric, kinematic, or relational information of features in the set of observations.
9. The method of claim 1, wherein the tokenized description is determined based on at least one of semantic, topological, geometric, kinematic, or relational information of features in the set of observations.
11. A processor, including one or more logical units to: generate a set of observations corresponding to a region of a physical environment; identify local map data corresponding to the region; and generate, based at least on a large language model (LLM) processing data corresponding to the local map data and at least a subset of the set of observations, a tokenized description indicating one or more differences identified between the local map data and the set of observations, wherein the tokenized description is used to determine whether to perform one or more updates to the local map data.
10. A processor, comprising: one or more circuits to: generate, based at least on a language model processing data associated with a set of observations corresponding to a location, a tokenized description of one or more features corresponding to the location; perform, using the tokenized description, a similarity search of a set of one or more previously-determined tokenized descriptions to determine one or more similar tokenized descriptions; and associate information, corresponding to the one or more similar tokenized descriptions, with the location.
12. The processor of claim 11, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
11. The processor of claim 10, wherein the information includes at least one of a type of location, rules for the location, or observed behavior for the location.
12. The processor of claim 10, wherein the one or more circuits are further to: use the information to automatically determine one or more operations to perform at the location.
13. The processor of claim 11, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
14. The processor of claim 11, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
13. The processor of claim 10, wherein the one or more previously-determined tokenized descriptions correspond to a set of points in a latent space, and wherein the one or more similar tokenized descriptions are determined for the generated tokenized descriptions based in part upon a proximity in the latent space.
15. The processor of claim 11, wherein the set of observations are determined on an ego machine operating in, or proximate to, the region of the physical environment, and wherein the LLM is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
16. The processor of claim 11, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
15. The processor of claim 14, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
17. A system comprising: one or more processors to determine one or more updates to map data based at least on one or more differences between the map data and a set of observations for the region, the one or more differences being identified based at least on a language model processing the map data and data corresponding to the set of observations.
16. A system comprising: one or more processors to determine one or more labels to associate with one or more features associated with a first physical location based at least on generating, based at least on a language model processing data associated with a set of observations corresponding to at least a portion of the first physical location, a tokenized description of at least a portion of the first physical location, and performing a similarity search with respect to one or more previously-generated tokenized descriptions for one or more second physical locations.
18. The system of claim 17, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
17. The system of claim 16, wherein the one or more processors are further to: use the one or more labels to determine one or more operations to perform at the physical location.
18. The system of claim 16, wherein the one or more previously-determined tokenized descriptions correspond to a set of points in a latent space, and wherein the one or more similar tokenized descriptions are determined for the generated tokenized descriptions based in part upon a proximity in the latent space.
19. The system of claim 17, wherein the map data, the set of observations, and the one or more differences are represented in a domain specific language (DSL) corresponding to a mapping domain.
20. The system of claim 17, wherein the system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
20. The system of claim 16, wherein the simulation system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
Claims 1-3, 9, 11, and 16-17 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 6, 10, 12-13, and 16-18 of co-pending Application No. 18/590,609 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-3, 9, 11, and 16-17 of the instant application are similar in scope and content to claims 1-2, 6, 10, 12-13, and 16-18 of the co-pending reference application from the same Applicant.
It is clear that all the elements of application claims 1-3, 9, 11, and 16-17 are to be found in co-pending claims 1-2, 6, 10, 12-13, and 16-18 (as application claims 1-3, 9, 11, and 16-17 fully encompass co-pending claims 1-2, 6, 10, 12-13, and 16-18). The difference between the application claims and the co-pending claims lies in the fact that the co-pending claims include many more elements and are thus much more specific. It has been held that the generic invention is “anticipated” by the “species.” See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claims 1-3, 9, 11, and 16-17 are anticipated by claims 1-2, 6, 10, 12-13, and 16-18 of the co-pending application, they are not patentably distinct from the co-pending claims.
Application No: 18/417,105
Application No: 18/590,609
1. (Currently Amended) A method, comprising: extracting a set of observations from sensor data corresponding to a region of a physical environment; identifying local map data corresponding to the region; generating one or more input tokenized descriptions of the set of observations and the local map data; generating, based at least on a trained language model processing the one or more input tokenized descriptions, [[a]] one or more output tokenized descriptions indicating one or more differences between the local map data and the set of observations; and determining, based at least on the tokenized description, whether one or more updates are to be performed with respect to the local map data in view of [[based]] on the one or more differences.
1. A method, comprising: obtaining a set of observations corresponding to a physical environment; identifying local map data that is aligned with the set of observations; and generating, using a trained language model and based at least on the local map data and at least a subset of the set of observations, a tokenized description of at least a portion of the environment, and performing one or more operations corresponding to a machine based at least on the tokenized description.
2. The method of claim 1, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
2. The method of claim 1, further comprising: capturing, using one or more sensors of the machine, sensor data corresponding to a physical environment of the machine; extracting domain-relevant features from the sensor data using a perception module; and generating the set of observations corresponding to the physical environment.
3. The method of claim 1, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
6. The method of claim 1, wherein the trained language model fuses the local map data and the portion of the set of observations to generate a single, consistent representation of the portion of the environment, and wherein the tokenized description is generated based in part on the single, consistent representation.
4. The method of claim 1, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
5. The method of claim 1, wherein the set of observations are determined using an ego machine operating in, or proximate to, the region of the physical environment, and wherein the trained language model is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
6. The method of claim 1, wherein potential differences are analyzed for at least two levels of granularity, starting at a higher level of granularity.
7. The method of claim 1, wherein the tokenized description further includes one or more recommended changes to the map data.
8. The method of claim 1, further comprising: receiving information for one or more updates to the local map data; and storing the updated map data for use in at least one of future operation or future difference determinations.
9. The method of claim 1, wherein the tokenized description is written in a road topology language (RTL) or other domain specific language (DSL).
10. The method of claim 3, wherein the tokenized description is written in a road topology language (RTL) or other domain specific language (DSL).
10. The method of claim 1, wherein the tokenized description is determined based at least on at least one of semantic, topological, geometric, kinematic, or relational information of features in the set of observations.
11. A processor, including one or more logical units to: generate a set of observations corresponding to a region of a physical environment; identify local map data corresponding to the region; and generate, based at least on a large language model (LLM) processing data corresponding to the local map data and at least a subset of the set of observations, a tokenized description indicating one or more differences identified between the local map data and the set of observations, wherein the tokenized description is used to determine whether to perform one or more updates to the local map data.
12. A processor, comprising: one or more circuits to: generate a set of observations corresponding to a physical environment; identify local map data that is aligned with the set of observations; and generate, based at least on a trained language model processing data including the local map data and at least a subset of the set of observations, a tokenized description of at least a portion of the environment.
13. The processor of claim 12, wherein the trained language model is to fuse the local map data and the portion of the set of observations to generate a single, consistent representation of the portion of the environment, and wherein the tokenized description is generated based in part on the single, consistent representation.
12. The processor of claim 11, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
13. The processor of claim 11, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data.
14. The processor of claim 11, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination.
15. The processor of claim 11, wherein the set of observations are determined on an ego machine operating in, or proximate to, the region of the physical environment, and wherein the LLM is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates.
16. The processor of claim 11, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
16. The processor of claim 12, wherein the processor is comprised in at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
17. A system comprising: one or more processors to determine one or more updates to map data based at least on one or more differences between the map data and a set of observations for the region, the one or more differences being identified based at least on a language model processing the map data and data corresponding to the set of observations.
17. A system comprising: one or more processors to use a trained language model to generate a tokenized description of at least a portion of a physical environment based at least on a set of observations corresponding to the physical environment and local map data aligned with the set of observations.
18. The system of claim 17, wherein the trained language model fuses the local map data and at least a portion of the set of observations to generate a single, consistent representation of the portion of the environment, and wherein the tokenized description is generated based at least in part on the single, consistent representation.
18. The system of claim 17, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data.
19. The system of claim 17, wherein the map data, the set of observations, and the one or more differences are represented in a domain specific language (DSL) corresponding to a mapping domain.
20. The system of claim 17, wherein the system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-8, 10-15 and 17-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hong (US 2022/0180056 A1).
As per claims 1, 11 and 17, Hong teaches a method, a processor, and a system comprising:
extracting a set of observations from sensor data corresponding to a region of a physical environment (0038, 0040 - responding to queries for a location search on a map, relative to the location of the person making the queries);
identifying local map data corresponding to the region (0038);
generating one or more input tokenized descriptions of the set of observations and the local map data (0038);
generating, based at least on a trained language model processing data representative of the local map data and at least a subset of the set of observations, a tokenized description indicating one or more differences between the local map data and the set of observations (0038); and
determining, based at least on the tokenized description, whether one or more updates are to be performed with respect to the local map data based on the one or more differences (0063, 0074).
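For illustration only, and not as a characterization of Hong's disclosure or of the claims of record: the claim-1 steps mapped above can be sketched in Python. Every name below (extract_observations, describe_differences, toy_lm, and so on) is hypothetical and stands in for a claimed step, not for any implementation in either reference.

```python
# Hypothetical sketch of the claim-1 pipeline: extract observations from
# sensor data, have a language model emit a tokenized description of the
# differences between local map data and the observations, then decide
# whether a map update is warranted.

def extract_observations(sensor_data):
    # Stand-in for a perception stage: keep only valid observations.
    return [obs for obs in sensor_data if obs is not None]

def describe_differences(language_model, local_map, observations):
    # The language model consumes map data plus observations and emits a
    # tokenized description of the differences between them.
    return language_model(local_map, observations)

def updates_needed(tokenized_description):
    # An update is performed only where the description reports a change.
    return [tok for tok in tokenized_description if tok["type"] != "no_change"]

def toy_lm(local_map, observations):
    # Toy "language model": flags map features absent from the observations.
    observed = {o["feature"] for o in observations}
    return [{"type": "removed", "feature": f} if f not in observed
            else {"type": "no_change", "feature": f}
            for f in local_map]

local_map = ["stop_sign", "lane_marking"]
obs = [{"feature": "lane_marking"}]
desc = describe_differences(toy_lm, local_map, extract_observations(obs))
print(updates_needed(desc))  # the stop sign is no longer observed
```

A real system would replace toy_lm with an actual trained language model; the sketch only shows the data flow from observations through a tokenized difference description to an update determination.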
As per claims 2, 12 and 18, Hong teaches the method/processor/system of claims 1, 11 and 17, wherein the set of observations includes at least one of sensor data, captured using one or more sensors in the region, or perception data generated using at least the sensor data (0038, 0040).
As per claims 3 and 13, Hong teaches the method/processor of claims 1 and 11, wherein the tokenized description is compared with additional tokenized descriptions received that correspond to the region in order to determine, with at least a minimum level of confidence, whether to perform the one or more updates with respect to the local map data (0041, 0056, 0054, 0077-0078).
As per claims 4 and 14, Hong teaches the method/processor of claims 1 and 11, wherein the tokenized description includes one or more text-based tokens specific to the one or more differences, the one or more text-based tokens including at least one of a type of difference, a delta indicating an extent of a difference, or a confidence value in a difference determination (0044, 0055).
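As an illustrative sketch only (the field names and token syntax below are invented, not drawn from Hong or from the claims of record): the claim-4 limitation describes difference-specific text tokens carrying a type of difference, a delta indicating its extent, and a confidence value, which could be encoded along these lines.

```python
# Hypothetical encoding of a single difference token per claim 4: a type,
# a delta indicating the extent of the difference, and a confidence value.
from dataclasses import dataclass

@dataclass
class DifferenceToken:
    diff_type: str     # e.g. "moved", "added", "removed"
    delta: float       # extent of the difference (e.g. meters displaced)
    confidence: float  # confidence in the difference determination

def to_text_tokens(diff: DifferenceToken) -> str:
    # Serialize as compact text-based tokens, one per field.
    return f"<{diff.diff_type}> <delta={diff.delta}> <conf={diff.confidence}>"

tok = DifferenceToken(diff_type="moved", delta=1.5, confidence=0.9)
print(to_text_tokens(tok))  # <moved> <delta=1.5> <conf=0.9>
```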
As per claims 5 and 15, Hong teaches the method/processor of claims 1 and 11, wherein the set of observations are determined using an ego machine operating in, or proximate to, the region of the physical environment, and wherein the trained language model is located on the ego machine, the ego machine to transmit the tokenized description across at least one network to a system to determine whether to perform the one or more updates (0046, 0054, 0104).
As per claim 6, Hong teaches the method of claim 1, wherein potential differences are analyzed for at least two levels of granularity, starting at a higher level of granularity (0078, 0081).
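Purely to illustrate the coarse-to-fine idea recited in claim 6 (the segment model below is hypothetical and not taken from Hong): potential differences can be screened at a higher level of granularity first, with a finer per-feature pass run only where the coarse pass flags a mismatch.

```python
# Hypothetical two-level granularity check: compare whole road segments
# first, then descend to individual features only for flagged segments.

def coarse_diff(map_segments, observed_segments):
    # Higher level of granularity: which whole segments disagree at all?
    return [sid for sid in map_segments
            if map_segments[sid] != observed_segments.get(sid)]

def fine_diff(map_segments, observed_segments, segment_ids):
    # Lower level of granularity: per-feature differences, computed only
    # for the segments the coarse pass flagged.
    out = {}
    for sid in segment_ids:
        m, o = map_segments[sid], observed_segments.get(sid, set())
        out[sid] = sorted(m ^ o)  # symmetric difference of feature sets
    return out

map_segments = {"seg_a": {"stop_sign"}, "seg_b": {"lane_marking"}}
observed = {"seg_a": {"stop_sign"}, "seg_b": {"lane_marking", "cone"}}
flagged = coarse_diff(map_segments, observed)
print(fine_diff(map_segments, observed, flagged))
```

The coarse pass prunes the work: only seg_b disagrees at the segment level, so only seg_b is examined feature by feature.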
As per claim 7, Hong teaches the method of claim 1, wherein the tokenized description further includes one or more recommended changes to the map data (0110).
As per claim 8, Hong teaches the method of claim 1, further comprising: receiving information for one or more updates to the local map data (0046, 0054); and storing the updated map data for use in at least one of future operation or future difference determinations (0046, 0054, 0104).
As per claims 10 and 18, Hong teaches the method/system of claims 1 and 17, wherein the tokenized description is determined based at least on at least one of semantic, topological, geometric, kinematic, or relational information of features in the set of observations (0085, 0096, 0038, 0052).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Hong (US 2022/0180056 A1) in view of Bouguerra et al. (US 2024/0354491 A1).
As per claims 9 and 19, Hong teaches the method and system of claims 1 and 17. Hong fails to explicitly teach wherein the map data, the set of observations, and the one or more differences are represented in a domain specific language (DSL) corresponding to a mapping domain, and wherein the tokenized description is written in a road topology language (RTL) or other domain specific language (DSL). However, Bouguerra et al. teach the claimed wherein the map data, the set of observations, and the one or more differences are represented in a domain specific language (DSL) corresponding to a mapping domain, and wherein the tokenized description is written in a road topology language (RTL) or other domain specific language (DSL) (Bouguerra et al., 0026-0027). Therefore, it would have been obvious to one of ordinary skill in the art at the time of Applicant’s effective filing date to incorporate the teachings of Bouguerra et al. into the method/system of Hong, because the structured-summary LLM can be trained to determine, in simple bullet points, information from received message(s) (Bouguerra et al., 0033).
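For illustration only: the toy grammar below is invented for this sketch and is not quoted from Hong, Bouguerra, or the claims of record. A road-topology-style DSL statement encoding map data, an observation, and a detected difference might look like the following.

```python
# Hypothetical "road topology language" fragment: a toy domain-specific
# textual encoding of map entities and a detected difference.

def to_rtl(entity, **attrs):
    # Emit one DSL statement, e.g. lane(id=2, width=3.5)
    body = ", ".join(f"{k}={v}" for k, v in attrs.items())
    return f"{entity}({body})"

statements = [
    to_rtl("lane", id=2, width=3.5),                          # map data
    to_rtl("sign", id=7, kind="stop"),                        # observation
    to_rtl("diff", target="sign:7", change="removed", conf=0.9),  # difference
]
print("\n".join(statements))
```

The point of such a DSL is that map data, observations, and differences share one machine-readable vocabulary, so a language model can consume and emit all three in the same token space.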
Claims 16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hong (US 2022/0180056 A1) in view of Li et al. (US 2024/0395261 A1).
Hong, while teaching the processor and system of claims 11 and 17, does not specifically teach the claimed wherein the system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate autonomous machine applications; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations; a system for performing generative AI operations using a large language model (LLM); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources, as per claims 16 and 20. Li et al., however, do teach a system for rendering graphical output; a system for performing deep learning operations (Li et al., 0033); a system for performing generative AI operations using a large language model (LLM) (Li et al., 0024); and a system for generating or presenting augmented reality (AR) content (Hong, 0079).
Therefore, it would have been obvious to one of ordinary skill in the art at the time of Applicant’s effective filing date to incorporate the system(s) as taught by Li et al. into the processor/system of Hong, because this would advantageously support natural language processing so that the learned personality of the virtual assistant is reflected in the responses returned by the NLP (Li et al., Abstract).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See attached form PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIJAY B CHAWAN whose telephone number is (571)272-7601. The examiner can normally be reached 7-5, Monday through Thursday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil can be reached at 571-272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VIJAY B CHAWAN/Primary Examiner, Art Unit 2658