DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. Claims 1-20 are presented for examination.
Claim Rejections - 35 USC § 101
3. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
3.1 Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 2A, Prong One
The claims recite a method, a non-transitory medium, and a system (claims 1, 9, and 17, respectively) for receiving an original simulation scenario …. The step of "receiving from the trained machine-learning model, a plurality of original simulation scenarios that include features that correspond to the at least one specified attribute, wherein combinations of other attributes in each original simulation scenario are different from each other," under the broadest reasonable interpretation, falls under a mental process. Therefore, the claims are directed to an abstract idea of performing data gathering and processing through the use of generic computer components.
Step 2A, Prong Two
This judicial exception is not integrated into a practical application because the additional limitations, such as "a non-transitory/memory … medium," "computing system," "one or more processors," and "instructions," either alone or in combination, serve only to gather and process data and do not add significantly more to the judicial exception; they are mere instructions to apply the exception using generic computer components involving well-known, routine, and conventional activities (see specification at para [0052]-[0055] and Fig. 5), which can be of any type, including a general-purpose computer (para [0055]: Processor 510 can include any general-purpose processor and a hardware service or software service, such as services 532, 534, and 536 stored in storage device 530, configured to control processor 510 as well as a special-purpose processor where software instructions are incorporated into the actual processor design.) previously known in the industry. Merely adding a programmable computer to perform generic computer functions does not automatically overcome an eligibility rejection. Alice, 573 U.S. at 223-24. Furthermore, the use of a general-purpose computer to apply an otherwise ineligible algorithm does not qualify as a particular machine. See Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 716-17 (Fed. Cir. 2014); TLI Commc'ns LLC v. AV Automotive, LLC, 823 F.3d 607, 613 (Fed. Cir. 2016) (mere recitation of concrete or tangible components is not an inventive concept); Eon Corp. IP Holdings LLC v. AT&T Mobility LLC, 785 F.3d 616 (Fed. Cir. 2015). Additionally, the step of "providing an input into the trained machine-learning model that describes the at least one specified attribute desired to be present in the original simulation scenario," under the broadest reasonable interpretation, reasonably falls under data gathering and processing; such pre-solution activities are also well-known, routine, and conventional activities for storing data in a memory and are not sufficient to amount to significantly more than the judicial exception (see further MPEP 2106.05(d)). The claims are thus not patent eligible under 35 U.S.C. 101.
Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as previously discussed with reference to the integration of the abstract idea into a practical application, the additional elements of "a non-transitory/memory … medium," "computing system," "one or more processors," and "instructions," either alone or in combination, serve only to gather and process data and do not add significantly more to the judicial exception; they are mere instructions to apply the exception using generic computer components involving well-known, routine, and conventional activities (see specification at para [0052]-[0055] and Fig. 5), which can be of any type, including a general-purpose computer (para [0055]: Processor 510 can include any general-purpose processor and a hardware service or software service, such as services 532, 534, and 536 stored in storage device 530, configured to control processor 510 as well as a special-purpose processor where software instructions are incorporated into the actual processor design.) previously known in the industry. Merely adding a programmable computer to perform generic computer functions does not automatically overcome an eligibility rejection. Alice, 573 U.S. at 223-24. Furthermore, the use of a general-purpose computer to apply an otherwise ineligible algorithm does not qualify as a particular machine. See Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 716-17 (Fed. Cir. 2014); TLI Commc'ns LLC v. AV Automotive, LLC, 823 F.3d 607, 613 (Fed. Cir. 2016) (mere recitation of concrete or tangible components is not an inventive concept); Eon Corp. IP Holdings LLC v. AT&T Mobility LLC, 785 F.3d 616 (Fed. Cir. 2015). Additionally, the step of "providing an input into the trained machine-learning model that describes the at least one specified attribute desired to be present in the original simulation scenario," under the broadest reasonable interpretation, reasonably falls under data gathering and processing; such pre-solution activities are also well-known, routine, and conventional activities for storing data in a memory and are not sufficient to amount to significantly more than the judicial exception (see further MPEP 2106.05(d)). Therefore, the use of computer components amounts to no more than mere instructions to perform the abstract idea and is not sufficient to amount to significantly more than the recited abstract idea.
3.2 Dependent claims 2-8, 10-16, and 18-20 merely include limitations pertaining to further mathematical computations (claims 2, 10, and 18); "wherein the trained machine-learning model becomes trained by the method comprising: inputting a training set of historical simulations with labeled attributes into a machine-learning model; inputting the at least one specified attribute into the machine-learning model" (data gathering and processing), "receiving original simulations from the machine-learning model; evaluating the original simulations from the machine-learning model against a golden set of simulations including the at least one specified attribute" (mental process), and "providing loss values to the machine-learning model to encourage the machine-learning model to output the original simulations that are similar to the golden set and discourage the original simulations that are not similar to the golden set" (data gathering and processing) (claims 3, 11, and 19); "wherein the input is a phrase or sentence, the method further comprising: determining, via natural language processing, keywords from the input; and correlating each keyword with the at least one specified attribute based on a lexicon database for a list of attributes" (mental process) (claims 4, 12, and 20); "wherein the input includes two or more specified attributes selected based on the input that describes the two or more specified attributes, and wherein the trained machine-learning model is trained to output each original simulation to include the two or more specified attributes" (data gathering and processing, or otherwise a mental process) (claims 5 and 13); "wherein the other attributes in the original simulations include at least one of a failed function of an autonomous vehicle control stack or an adjustment of the autonomous vehicle control stack while the autonomous vehicle control stack is navigating the original simulation scenarios" (data gathering and processing, or otherwise a mental process) (claims 6 and 14); "executing simulations using the plurality of original simulation scenarios for an autonomous vehicle control stack to navigate" (mental process) (claims 7 and 15); and "failing one or more functions of the autonomous vehicle control stack or adding an adjustment of the autonomous vehicle control stack while the autonomous vehicle control stack is navigating each original simulation scenario" (mental process) and "based on running the original simulation scenarios, determine a feature that needs improvement based on simulated responses by an autonomous vehicle" (mental process) (claims 8 and 16). These limitations amount to further mental processes similar to those already recited by the independent claims and addressed above, and the dependent claims are therefore likewise not patent eligible under 35 U.S.C. 101.
3.3 Claims 9-16 are further rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 9, along with its dependents, recites a "non-transitory computer readable media comprising instructions" that, when executed by a computing system, cause the computing system to …; this could be interpreted as software per se, or merely software instructions within the media as claimed, and there is no evidence that a processor or computing system is actually used to execute said instructions to perform the claimed steps.
Claim Rejections - 35 USC § 103
4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hakimi et al. (USPG_PUB No. 2023/0185997) in view of Wyrwas et al. (USPG_PUB No. 2022/0012388).
5.1 In considering claims 1, 9, and 17, Hakimi et al. teaches a method for receiving an original simulation scenario having at least one specified attribute from a trained machine-learning model, the method comprising:
providing an input into the trained machine-learning model that describes the at least one specified attribute desired to be present in the original simulation scenario (see para [0050], In these aspects of the present disclosure, the user configures each stakeholder, providing details about the identity and known preferences of the individuals being simulated. In addition to specific traits being configured, the persona models 313 of FIG. 3 accept biographical information and information from patent databases. The user may also input past feedback from the stakeholders for the user's own designs or those of others. [0052] A product design process 450 of FIG. 4 begins at block 452, in which a product design is manually provided to an automated analysis module. For example, a written description of the potential product is provided as an input to the stakeholder simulation engine 314 of FIG. 3. In some aspects of the present disclosure, the stakeholder simulation engine 314 takes a description of the design (text, image, or speech) as input. Using a machine learning algorithm, the simulation engine generates a series of scores from each persona for the design description.); and receiving from the trained machine-learning model, a plurality of original simulation scenarios that include features that correspond to the at least one specified attribute (see para [0052] A product design process 450 of FIG. 4 begins at block 452, in which a product design is manually provided to an automated analysis module. For example, a written description of the potential product is provided as an input to the stakeholder simulation engine 314 of FIG. 3. In some aspects of the present disclosure, the stakeholder simulation engine 314 takes a description of the design (text, image, or speech) as input. Using a machine learning algorithm, the simulation engine generates a series of scores from each persona for the design description. [0062] Referring again to FIG. 
6, at block 604, the plurality of stakeholder personas are simulated, using the plurality of stakeholder models, in the product review process of a potential product. For example, as shown in FIG. 3, product design module 310 includes the stakeholder simulation engine 314 configured to simulate, using the plurality of stakeholder models, the plurality of stakeholder personas in the product review process of a potential product. For example, the stakeholder simulation engine 314 takes a description of the design (text, image, or speech) as input. Using a machine learning algorithm, the stakeholder simulation engine 314 generates a series of scores from each persona for the design description.), but does not teach wherein combinations of other attributes in each original simulation scenario are different from each other.
Wyrwas et al. teaches a method for the generation of autonomous simulation data (see title), including the creation of one or more simulation scenarios used to produce machine learning models to control autonomous vehicles (see abstract), wherein combinations of other attributes in each original simulation scenario are different from each other (see para [0053], In some implementations, an individual simulation may include a variety of simulation scenarios that describe a set of tests of different specific encounters between an autonomous vehicle, its environment, and other moving and non-moving actors (e.g., other vehicles, other autonomous vehicles, pedestrians, animals, machinery like traffic lights, gates, drawbridges, and non-human moveable things like debris, etc.). Further [0089]; [0104], A simulation scenario generation block 1510 generates simulation scenarios. In some implementations, the scenarios are generated at least in part based on the augmented data. In some implementations, an individual simulation may include a variety of simulation scenarios that describe a set of tests of different specific encounters between an autonomous vehicle 100, its environment, and other actors.).
Hakimi et al. and Wyrwas et al. are analogous art because they are from the same field of endeavor and the model analyzed by Wyrwas et al. is similar to that of Hakimi et al. Therefore, it would have been obvious to a person of ordinary skill in the art to combine the method of Wyrwas et al. with that of Hakimi et al. because Wyrwas et al. teaches the improvement of the accuracy of machine learning (see [0083]).
5.2 Regarding claims 2, 10, and 18, the combined teachings of Hakimi et al. and Wyrwas et al. teach that the trained machine-learning model becomes trained (see abstract, The method includes training a neural network to simulate a plurality of stakeholder personas in a product review process to provide a plurality of stakeholder models. [0061], A method 600 of FIG. 6 begins at block 602, in which a neural network is trained to simulate a plurality of stakeholder personas in a product review process to provide a plurality of stakeholder models.) by the method comprising: inputting a training set of historical simulations with labeled attributes into a machine-learning model (see Hakimi et al. para [0050], In addition to specific traits being configured, the persona models 313 of FIG. 3 accept biographical information and information from patent databases. The user may also input past feedback from the stakeholders for the user's own designs or those of others. [0062], For example, the stakeholder simulation engine 314 takes a description of the design (text, image, or speech) as input. Using a machine learning algorithm, the stakeholder simulation engine 314 generates a series of scores from each persona for the design description.); inputting the at least one specified attribute into the machine-learning model (see Hakimi et al. para [0052], For example, a written description of the potential product is provided as an input to the stakeholder simulation engine 314 of FIG. 3. In some aspects of the present disclosure, the stakeholder simulation engine 314 takes a description of the design (text, image, or speech) as input. Using a machine learning algorithm, the simulation engine generates a series of scores from each persona for the design description.); receiving original simulations from the machine-learning model (see Hakimi et al. para [0062] Referring again to FIG. 
6, at block 604, the plurality of stakeholder personas are simulated, using the plurality of stakeholder models, in the product review process of a potential product. For example, as shown in FIG. 3, product design module 310 includes the stakeholder simulation engine 314 configured to simulate, using the plurality of stakeholder models, the plurality of stakeholder personas in the product review process of a potential product. For example, the stakeholder simulation engine 314 takes a description of the design (text, image, or speech) as input. Using a machine learning algorithm, the stakeholder simulation engine 314 generates a series of scores from each persona for the design description. The stakeholder simulation engine 314 may also maintain representations of previous interactions with the user, as shown in FIG. 5.); evaluating the original simulations from the machine-learning model against a golden set of simulations including the at least one specified attribute (Hakimi et al. [0061] FIG. 6 is a flowchart illustrating a method for machine-assisted collaborative product design, according to aspects of the present disclosure. A method 600 of FIG. 6 begins at block 602, in which a neural network is trained to simulate a plurality of stakeholder personas in a product review process to provide a plurality of stakeholder models. For example, according to the configuration of the product design module 310 shown in FIG. 3, the stakeholder persona training module 312 is configured to train a neural network to simulate a plurality of stakeholder personas in a product review process to provide a plurality of stakeholder models, such as the persona models 313. As shown in FIG. 4, updating of the stakeholder models at block 420 may be performed by the stakeholder persona training module 312 using the persona models 313. 
The persona models 313 may provide a configurable model of stakeholder personas based on extensive ethnographic and behavioral science research and contain representations of multiple stakeholders typically involved in the product design process. Further Wyrwas et al. para [0078], For example, the simulation scenario may correspond to a perception simulation scenario that imitates the operation of the perception subsystem 154 or a planning simulation scenario that imitates the operation of the planning subsystem 156 of the autonomous vehicle 100. In some implementations, the scenario production engine 206 sends a simulation identifier to the simulator 208. The simulator 208 uses the simulation identifier to fetch a configuration of a matching simulation scenario from the simulation data 212 and executes a simulation based on the fetched simulation scenario configuration.); and providing loss values to the machine-learning model to encourage the machine-learning model to output the original simulations that are similar to the golden set and discourage the original simulations that are not similar to the golden set (see Wyrwas et al. para [0083], In some implementation, the simulation results/messages from the simulator 208 are used as a source of training data for a machine learning engine 166 used to train machine learning model 224. The machine learning engine 166 retrieves a base model 602 and uses the simulation data to train the base model 602 and generate a trained machine learning model 224. The simulation data may be repeatedly and iteratively used to improve the accuracy of the machine learning model 224 as represented by line 604 to and from the machine learning engine 166 in FIG. 6. Thus, the simulation data may also be used for re-training or refinement of the machine learning model 224. The improved machine learning model 224 can in turn be used by the perception subsystem 154, for example.). 
Therefore, it would have been obvious to a person of ordinary skill in the art to combine the method of Wyrwas et al. with that of Hakimi et al. because Wyrwas et al. teaches the improvement of the accuracy of machine learning (see [0083]).
5.3 With regard to claims 3, 11, and 19, the combined teachings of Hakimi et al. and Wyrwas et al. teach that the input is a phrase or sentence (see Hakimi et al. para [0052], For example, a written description of the potential product is provided as an input to the stakeholder simulation engine 314 of FIG. 3. In some aspects of the present disclosure, the stakeholder simulation engine 314 takes a description of the design (text, image, or speech) as input.), the method further comprising: determining, via natural language processing, keywords from the input (see Hakimi et al. para [0038], natural language processor (NLP) 340, used to determine said keywords from the input); and correlating each keyword with the at least one specified attribute based on a lexicon database for a list of attributes (see Hakimi et al.'s natural language processor (NLP) 340; further Wyrwas et al. [0078], the simulation scenario may correspond to a perception simulation scenario that imitates the operation of the perception subsystem 154 or a planning simulation scenario that imitates the operation of the planning subsystem 156 of the autonomous vehicle 100. In some implementations, the scenario production engine 206 sends a simulation identifier to the simulator 208. The simulator 208 uses the simulation identifier to fetch a configuration of a matching simulation scenario from the simulation data 212 and executes a simulation based on the fetched simulation scenario configuration.). Therefore, it would have been obvious to a person of ordinary skill in the art to combine the method of Wyrwas et al. with that of Hakimi et al. because Wyrwas et al. teaches the improvement of the accuracy of machine learning (see [0083]).
5.4 As per claims 4, 12, and 20, the combined teachings of Hakimi et al. and Wyrwas et al. teach that the input includes two or more specified attributes selected based on the input that describes the two or more specified attributes (see Wyrwas et al. para [0077], In some implementations, the platform file 500 includes a configuration file that defines input files, configured variations of targets in the augmented data, metadata tags to define attributes of added actors such as a number of pedestrians, and other information required to generate changes in state in the scenario. In some implementations, the scenario production engine 206 may register a simulation scenario by generating a simulation identifier, assigning the simulation identifier to the simulation scenario, and storing the simulation scenario in the simulation data 212. For example, the simulation identifier may be a globally unique identifier (GUID). The simulation data 212 may be a database storing currently and previously available simulation scenarios indexed by their corresponding simulation identifiers. Further [0080]-[0081]), and that the trained machine-learning model is trained to output each original simulation to include the two or more specified attributes (see Wyrwas et al. para [0077], In some implementations, the platform file 500 includes a configuration file that defines input files, configured variations of targets in the augmented data, metadata tags to define attributes of added actors such as a number of pedestrians, and other information required to generate changes in state in the scenario. In some implementations, the scenario production engine 206 may register a simulation scenario by generating a simulation identifier, assigning the simulation identifier to the simulation scenario, and storing the simulation scenario in the simulation data 212. [0078], The simulator 208 may execute a simulation based on a selected simulation scenario. 
For example, the simulation scenario may correspond to a perception simulation scenario that imitates the operation of the perception subsystem 154 or a planning simulation scenario that imitates the operation of the planning subsystem 156 of the autonomous vehicle 100. In some implementations, the scenario production engine 206 sends a simulation identifier to the simulator 208. The simulator 208 uses the simulation identifier to fetch a configuration of a matching simulation scenario from the simulation data 212 and executes a simulation based on the fetched simulation scenario configuration.). Therefore, it would have been obvious to a person of ordinary skill in the art to combine the method of Wyrwas et al. with that of Hakimi et al. because Wyrwas et al. teaches the improvement of the accuracy of machine learning (see [0083]).
5.5 Regarding claims 5 and 13, the combined teachings of Hakimi et al. and Wyrwas et al. teach that the other attributes in the original simulations include at least one of a failed function of an autonomous vehicle control stack or an adjustment of the autonomous vehicle control stack while the autonomous vehicle control stack is navigating the original simulation scenarios (see Wyrwas et al. para [0006], For instance, the method further comprises executing a simulation based on the simulation scenario to generate a simulated output, providing the simulation scenario as a training input to the machine learning model to generate a predicted output of the machine learning model, and updating the machine learning model based on a difference between the predicted output and the simulated output of the simulation scenario. For instance, features may also include that generating the variation of the augmented data includes changing one or more actor characteristics associated with the actor, that the actor is associated with an actor characteristic, and the actor characteristic includes one from a group of actor velocity, actor type, actor size, actor geometric shape, actor reflectivity, actor color, actor path, lateral offset of motion of the actor, longitudinal offset of motion of the actor, and actor behavior response, that the generating a variation of the augmented data includes deleting the actor, adding an additional actor, or both. [0037] Sensors 130 may include various sensors suitable for collecting information from a vehicle's surrounding environment for use in controlling the operation of the vehicle 100. 
For example, sensors 130 can include RADAR sensor 134, LIDAR (Light Detection and Ranging) sensor 136, a 3D positioning sensor 138, e.g., a satellite navigation system such as GPS (Global Positioning System), GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema, or Global Navigation Satellite System), BeiDou Navigation Satellite System (BDS), Galileo, Compass, etc.). Therefore, it would have been obvious to a person of ordinary skill in the art to combine the method of Wyrwas et al. with that of Hakimi et al. because Wyrwas et al. teaches the improvement of the accuracy of machine learning (see [0083]).
5.6 As per claims 6 and 14, the combined teachings of Hakimi et al. and Wyrwas et al. teach the step of executing simulations using the plurality of original simulation scenarios for an autonomous vehicle control stack to navigate (see Wyrwas et al. para [0037], Sensors 130 may include various sensors suitable for collecting information from a vehicle's surrounding environment for use in controlling the operation of the vehicle 100. [0113], The input may, in some implementations, be a simulation scenario and execution of the simulation scenario to generate a state or condition for the perception subsystem 154 or the planning subsystem 156. However, more generally the input may include simulated sensor data. In block 1830, simulation data is generated by executing the simulation scenario. In block 1835, a machine learning model 224 of the autonomous vehicle 100 is trained based at least in part on the simulation data. In block 1840, the trained machine learning model 224 is applied to control the autonomous vehicle 100.). Therefore, it would have been obvious to a person of ordinary skill in the art to combine the method of Wyrwas et al. with that of Hakimi et al. because Wyrwas et al. teaches the improvement of the accuracy of machine learning (see [0083]).
5.7 With regard to claims 7 and 15, the combined teachings of Hakimi et al. and Wyrwas et al. teach the step of failing one or more functions of the autonomous vehicle control stack or adding an adjustment of the autonomous vehicle control stack while the autonomous vehicle control stack is navigating each original simulation scenario (see Wyrwas et al. para [0006], For instance, the method further comprises executing a simulation based on the simulation scenario to generate a simulated output, providing the simulation scenario as a training input to the machine learning model to generate a predicted output of the machine learning model, and updating the machine learning model based on a difference between the predicted output and the simulated output of the simulation scenario. For instance, features may also include that generating the variation of the augmented data includes changing one or more actor characteristics associated with the actor, that the actor is associated with an actor characteristic, and the actor characteristic includes one from a group of actor velocity, actor type, actor size, actor geometric shape, actor reflectivity, actor color, actor path, lateral offset of motion of the actor, longitudinal offset of motion of the actor, and actor behavior response, that the generating a variation of the augmented data includes deleting the actor, adding an additional actor, or both. [0037] Sensors 130 may include various sensors suitable for collecting information from a vehicle's surrounding environment for use in controlling the operation of the vehicle 100. For example, sensors 130 can include RADAR sensor 134, LIDAR (Light Detection and Ranging) sensor 136, a 3D positioning sensor 138, e.g., a satellite navigation system such as GPS (Global Positioning System), GLONASS (Global Navigation Satellite System), BeiDou Navigation Satellite System (BDS), Galileo, Compass, etc.). 
Therefore, it would have been obvious to a person of ordinary skill in the art to combine the method of Wyrwas et al. with that of Hakimi et al. because Wyrwas et al. teaches the improvement of the accuracy of machine learning (see [0083]).
5.8 As per claims 8 and 16, the combined teachings of Hakimi et al. and Wyrwas et al. teach determining, based on running the original simulation scenarios, a feature that needs improvement based on simulated responses by an autonomous vehicle (see Wyrwas et al. para [0006], In general, other aspects of the subject matter of this disclosure may be implemented in methods where generating the variation of the augmented data includes generating a plurality of sets of augmented data samples, and wherein generating the simulation scenario using the variation of the augmented data includes generating a plurality of simulation scenarios each one corresponding to one set of the plurality of sets of augmented data samples. For instance, the method may also include executing a simulation based on the simulation scenario to generate a simulated output, providing the simulation scenario as a training input to the machine learning model to generate a predicted output of the machine learning model, and updating the machine learning model based on a difference between the predicted output and the simulated output of the simulation scenario. For example, features may also include that the simulation scenario describes motion behavior of a simulated vehicle and at least one simulated actor. Still other implementations include an actor that is one from a group of another vehicle, a bicycle, a scooter, traffic, a pedestrian, an animal, machinery, debris, a static object, and a non-human moveable object. Further [0078]; [0083], The simulation data may be repeatedly and iteratively used to improve the accuracy of the machine learning model 224 as represented by line 604 to and from the machine learning engine 166 in FIG. 6. Thus, the simulation data may also be used for re-training or refinement of the machine learning model 224. The improved machine learning model 224 can in turn be used by the perception subsystem 154, for example.). 
Therefore, it would have been obvious to a person of ordinary skill in the art to combine the method of Wyrwas et al. with that of Hakimi et al. because Wyrwas et al. teaches the improvement of the accuracy of machine learning (see [0083]).
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
6.1 Malan (USPG_PUB No. 2019/0244137) teaches a method and system for training a machine learning model by generating first training input that includes a first number of reports at a first point in time.
6.2 Chu (USPG_PUB No. 2023/0202507) teaches systems, methods, and computer-readable media for a control system for an autonomous vehicle (AV) simulator.
7. Claims 1-20 are rejected and this action is non-final. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDRE PIERRE-LOUIS whose telephone number is (571)272-8636. The examiner can normally be reached M-F 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, EMERSON C PUENTE can be reached at 571-272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDRE PIERRE LOUIS/Primary Patent Examiner, Art Unit 2187 February 3, 2026