Prosecution Insights
Last updated: April 19, 2026
Application No. 18/948,357

METHOD FOR EVALUATING HUMAN-MACHINE INTERACTION OF VEHICLE, SYSTEM, EDGE COMPUTING DEVICE, AND MEDIUM

Non-Final OA: §101, §102, §103
Filed: Nov 14, 2024
Examiner: BOROWSKI, MICHAEL
Art Unit: 3624
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Kingfar International Inc.
OA Round: 1 (Non-Final)
Grant Probability: 0% (At Risk)
Predicted OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability with Interview: 0%

Examiner Intelligence

Grants only 0% of cases.

Career Allow Rate: 0% (0 granted / 12 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal lift among resolved cases with interview)
Typical Timeline: 3y 0m avg prosecution; 55 currently pending
Career History: 67 total applications across all art units
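The headline numbers in this panel reduce to simple ratios. Below is a minimal sketch of the arithmetic; the function names are illustrative (not from any specific analytics API), and the 52.0% Tech Center average is back-computed from the -52.0% delta shown above rather than stated in the report.

```python
# Sketch of the arithmetic behind the examiner-level metrics above.
# Function names are illustrative, not from any specific analytics API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved if resolved else 0.0

def delta_vs_tc(examiner_rate: float, tc_avg_rate: float) -> float:
    """How far the examiner's rate sits from the Tech Center average."""
    return examiner_rate - tc_avg_rate

rate = allow_rate(granted=0, resolved=12)
# 52.0% TC average is implied by the -52.0% delta in the panel above.
delta = delta_vs_tc(rate, tc_avg_rate=52.0)
print(f"Career Allow Rate: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

On the figures shown (0 granted of 12 resolved), this prints a 0.0% allow rate sitting 52.0 points below the implied Tech Center average.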

Statute-Specific Performance

§101: 57.9% (+17.9% vs TC avg)
§103: 33.8% (-6.2% vs TC avg)
§102: 4.0% (-36.0% vs TC avg)
§112: 4.3% (-35.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 12 resolved cases
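Each statute row pairs an examiner-specific rate with its offset from the Tech Center average, so subtracting the delta from the rate recovers the implied TC baseline. A quick consistency check on the figures above (rates hard-coded from the table) shows all four rows imply the same 40.0% baseline:

```python
# Consistency check on the statute-specific rows above: subtracting each
# "vs TC avg" delta from the examiner's rate recovers the TC average estimate.

rows = {
    "§101": (57.9, +17.9),
    "§103": (33.8, -6.2),
    "§102": (4.0, -36.0),
    "§112": (4.3, -35.7),
}

for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta  # implied Tech Center average for this statute
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg {tc_avg:.1f}%)")
```

That every row lands on the same 40.0% suggests the report uses a single flat Tech Center baseline rather than per-statute averages, though the report does not say so explicitly.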

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections – 35 U.S.C. § 101

35 U.S.C. § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. Claims 1-20 are directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea) without providing significantly more.

Step 1

Step 1 of the subject matter eligibility analysis per MPEP § 2106.03 requires the claims to be directed to a process, machine, manufacture, or composition of matter. Claims 1-20 are directed to a process (method), a machine (system), and a product/article of manufacture, which are statutory categories of invention.

Step 2A

Claims 1-20 are directed to abstract ideas, as explained below. Prong one of the Step 2A analysis requires identifying the specific limitation(s) in the claim under examination that the examiner believes recite an abstract idea, and determining whether the identified limitation(s) fall within at least one of the groupings of abstract ideas: mathematical concepts, mental processes, and certain methods of organizing human activity.

Step 2A-Prong 1

The claims recite the following limitations that are directed to abstract ideas, which can be summarized as being directed to a method (the abstract idea) of analyzing a human-machine interface of a vehicle through use of virtual software and virtual tools that generate the human-machine interaction system and display the system to enable the tester to interact with it.
Claim 1 discloses a method for evaluating a human-machine interaction of a vehicle, the method comprising: acquiring, in a target driving test scenario, human-factor interaction data and external data, wherein the human-factor interaction data is generated based on an interaction between a tester and a human-machine interaction system of the vehicle, and the external data is associated with driving of the vehicle (organizing human activity through following rules or instructions and mitigating risk; mental processes including observation, evaluation, judgement, opinion); and evaluating an interaction in the human-machine interaction system based on the human-factor interaction data and the external data to obtain evaluation data for the interaction (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion).

Additional limitations include:

- evaluating the interaction based on data through processing data to obtain processed data, a preliminary result, and evaluation data for the interaction (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 2);
- labeling the human-factor interaction data and the external data, dividing data segments of the human-factor interaction data and the external data, determining information about the human-machine interaction system in which the tester is interested, and performing a smoothing process on the human-factor interaction data and the external data; and obtaining the preliminary evaluation result based on the preprocessing result, wherein the preliminary evaluation result comprises at least one of association information between the human-factor interaction data and the external data, and evaluation information for the interaction (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 3);
- where the evaluation data includes a score and the evaluation rule data includes an index and a score corresponding to the index; processing the processed data and the preliminary evaluation result to obtain an index value corresponding to the evaluation index; and obtaining the evaluation score for the interaction based on the index value and the score corresponding to the evaluation index (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 4);
- acquiring subjective evaluation data for the interaction; and evaluating the interaction based on the human-factor interaction data, the external data, and the subjective evaluation data (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 5);
- assessing the evaluation rule data, wherein the evaluation rule data comprises at least one of basic evaluation data, experience evaluation data, and user-defined evaluation data (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 6);
- assessing the interaction based on the evaluation rule data to obtain an objective assessment result; obtaining a subjective assessment result for the interaction; and obtaining an assessment result for the evaluation rule data based on the objective assessment result and the subjective assessment result (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 7);
- selecting the target driving test scenario from a candidate driving test scenario; acquiring driving environment information of the target driving test scenario based on a data acquisition manner corresponding to the target driving test scenario; and displaying the driving environment information to enable the tester to control the vehicle to drive through interacting with the driving environment information (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 8);
- acquiring updated driving environment information subsequent to the tester controlling the vehicle to drive through interacting based on the driving environment information; and displaying the updated driving environment information (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 9);
- where the candidate driving test scenario comprises a (hybrid) driving test scenario, generating data and collecting data, and a real driving test scenario, collecting data and receiving external data (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 10);
- receiving editing data for the interaction, wherein the editing data is used to edit visual content and a display form of the interaction, to enable the tester to interact with the interaction (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 11);
- where the interaction comprises inputs to the driver, human factor data from the driver, and external data associated with driving the vehicle, as well as voice data (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 12);
- where the tester is a driver and the method adds collecting human-factor interaction data of a driver of the vehicle and a driver state based on pieces of human-factor interaction data, including a fatigue level and/or an emotional state of the evaluation driver; and obtaining experience evaluation data of a vehicle cabin based on the human-factor interaction data and the driver state, the experience data being used to assess interactivity between the vehicle cabin and the driver (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 13);
- synchronizing data collected for a same driver, wherein evaluation data of the vehicle cabin is based on the plurality of pieces of synchronized human-factor interaction data and the driver state (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 14);
- executing a test project management task, and/or a vehicle model management task, a tester management task, a primary tester management task, and a resource library management task; generating an evaluation process and the corresponding evaluation content based on a timeline; a data analysis task; and constructing an index system, obtaining corresponding weights of the human-factor interaction data and the driver state based on the index system, and performing task execution by utilizing the weights (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 15);
- where the test project management task comprises at least one of a project creation task, a personnel assignment task, a vehicle model association task, and a project progress presentation task; the vehicle model management task comprises at least one of a creation task and a management task of vehicle model information; the tester management task comprises at least one of a tester adding task, a tester demographic information adding task, a demographic information user-defined adding task, a tester history recording task, a tester import and export task, and a demographic information statistics task; the primary tester management task comprises at least one of a primary tester adding task and a primary tester history recording task; the resource library management task comprises one or more of a use case management subtask, a questionnaire scale management subtask, a behavior experiment management subtask, a journey management subtask, and a voice evaluation management subtask in a subjective evaluation task, and an evaluation tool management subtask and a state evaluation algorithm model management subtask in an objective evaluation task; and the data analysis task comprises at least one of a behavior data analysis task, an eye movement data analysis task, a physiological data analysis task, a voice data analysis task, and a visual report task (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 16);
- constructing an index system including any of a safety index, an efficiency index, and a pleasantness index; and an evaluation method for the index system, subjective and/or objective, wherein the subjective evaluation comprises a questionnaire scale and the objective evaluation comprises at least one of a physiological index, behavior data, and eye movement data (following rules or instructions, mitigating risk, observation, evaluation, judgement, opinion; claim 17).

Each of these claimed limitations employs organizing human activity in the form of fundamental economic principles and practices based on mitigating risk or following rules or instructions, and performing mental processes including observation, evaluation, judgement, and opinion. Claims 18-20 recite similar abstract ideas to those identified with respect to claims 1-17. Thus, the concepts set forth in claims 1-20 recite abstract ideas.

Step 2A-Prong 2

As per MPEP § 2106.04, while claims 1-20 recite additional limitations which are hardware or software elements (such as an interaction element, virtual software, a virtual sensor, a virtual driving test scenario, a virtual-real combination driving test scenario, a real sensor, an external device, a human-machine interaction system, a vehicle Head Up Display system, an instrument panel, an instrument, a central control screen, a co-driver interaction device, and an entertainment screen), these limitations are not sufficient to qualify as a practical application recited in the claims along with the abstract ideas, since these elements are invoked as tools to apply the instructions of the abstract ideas in a specific technological environment.
The mere application of an abstract idea in a particular technological environment, and merely limiting the use of an abstract idea to a particular technological field, do not integrate an abstract idea into a practical application (MPEP § 2106.05(f) & (h)). Evaluated individually, the additional elements do not integrate the identified abstract ideas into a practical application.

Evaluating the limitations as an ordered combination, however, discloses an evaluation system providing data on the physical and physiological interactions occurring between a driver and a vehicle. The abstract ideas are integrated across a series of sensors and measurement interfaces where both subjective and objective data capture specific aspects of human control for operating a vehicle. Data are collected, processed, and evaluated to inform the development of driver-vehicle/machine interfaces, controls, and the vehicle cabin. The claims describe a "practical application" of the abstract idea because they improve the technological field of human-machine interfaces for control of a complex system, in this case a motor vehicle. Therefore, since the limitations in claims 1-20 transform the exception into a patent-eligible application such that the claims amount to significantly more than the exception itself, the claims are directed to statutory subject matter and are not rejected under 35 U.S.C. § 101.

Claim Rejections – 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102(a)(1) that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 8, 10-11, 13-14, and 18-20 are rejected under 35 U.S.C.
§ 102 as being anticipated by Beal (US 2023/0367692 A1), "Systems and Methods for Testing and Analyzing Human Machine Interfaces."

Regarding Claim 1: A method for evaluating a human-machine interaction of a vehicle. Beal teaches (systems and methods of evaluating control interfaces based on user interaction, [Abstract]); the method comprising: acquiring, in a target driving test scenario, human-factor interaction data and external data, wherein the human-factor interaction data is generated based on an interaction between a tester and a human-machine interaction system of the vehicle. Beal teaches (that paired data may be measured and/or collected from an individual driving an actual vehicle, [0031]; parameters include one or more of: a total eyes off road time metric, a task completion time, a subtask completion time, or a performance score; calculating a plurality of parameters of the first user interaction, wherein the actual environment UI is a test UI of a vehicle, [0006]); and the external data is associated with driving of the vehicle (The system uses its external environment cameras to correlate exterior environmental conditions, [0067]); and evaluating an interaction element in the human-machine interaction system based on the human-factor interaction data and the external data to obtain evaluation data for the interaction element (this functionality could be used to evaluate a proposed new automotive interface, evaluate a software update to an existing automotive interface, or evaluate candidate automotive interfaces as compared to one another, [0036]).

Claim 19 is rejected for reasons corresponding to those of claim 1. The addition of a device comprising a memory and processor does not change the rationale for rejecting the claim under 35 U.S.C. § 102. Beal teaches: In some embodiments, a system comprising a processor coupled to a computer readable medium and memory is configured such that the processor executes the steps as shown in FIGS. 8-10, [0069].
Claim 20 is rejected for reasons corresponding to those of claim 1. The addition of a computer-readable storage medium does not change the rationale for rejecting the claim under 35 U.S.C. § 102. Beal teaches: In some embodiments, a system comprising a processor coupled to a computer readable medium and memory is configured such that the processor executes the steps as shown in FIGS. 8-10, [0069].

Regarding Claim 2: The method according to claim 1, wherein said evaluating the interaction element in the human-machine interaction system based on the human-factor interaction data and the external data comprises: processing the human-factor interaction data and the external data to obtain processed data and a preliminary evaluation result. Beal teaches (a computer-implemented method of evaluating a control interface based on user interaction, [ ], calculating a plurality of parameters of the first user interaction, [ ], and outputting an indication for the actual user interface, [0006]); and processing the processed data and the preliminary evaluation result based on evaluation rule data to obtain the evaluation data for the interaction element (receiving, from one or more sensors, signals indicative of a first user interaction with the displayed simulated environment UI; calculating a plurality of parameters of the simulated environment UI based on the received signals indicative of the first user interaction, wherein the plurality of parameters include one or more of: a total eyes off road time metric, [ ], receiving, from one or more sensors, signals indicative of a second user interaction with the displayed actual environment UI; calculating a second plurality of parameters of the actual environment UI based on the received signals indicative of the second user interaction, wherein the second plurality of parameters include one or more of: a total eyes off road time metric, a task completion time, a subtask completion time, or a predictive score; comparing the
first plurality of parameters to the second plurality of parameters; and outputting an indication for one or both of the simulated or actual environment UI, [0007]).

Regarding Claim 3: The method according to claim 2, wherein said processing the human-factor interaction data and the external data to obtain the processed data and the preliminary evaluation result comprises: preprocessing the human-factor interaction data and the external data to obtain a preprocessing result as the processed data, wherein the preprocessing result comprises at least one of labeling the human-factor interaction data and the external data, dividing data segments of the human-factor interaction data and the external data. Beal teaches (processor functions may also be described as annotating. In other embodiments, such annotating may be performed manually by a user. Annotation of data acquired using the systems and methods described herein may be linked (e.g., based on reference tables, look-up tables, labeling, etc.) to other engineering documents and software and/or fed into machine learning algorithms for additional post-processing, [0046]); determining information about the human-machine interaction system in which the tester is interested (the disclosure describes an ecosystem of data collection, structuring these data in an increasingly automated fashion, and providing state predictions.
Data collection in-vehicle (actual environment) and in-home (simulated environment) is used to produce a pipeline of data correlated across dimensions of interest which feeds a system which can provide useful predictions to assist in automotive design, and in the prediction of the impact of automotive design, [0040]); and performing a smoothing process on the human-factor interaction data and the external data (This system can be used by product teams to improve their products' performance and safety, and the processing of data using the software provides annotated data that can then be further processed into algorithms and/or fed into artificial intelligence systems, [0023]); and obtaining the preliminary evaluation result based on the preprocessing result, wherein the preliminary evaluation result comprises at least one of association information between the human-factor interaction data and the external data, and evaluation information for the interaction element (the sensor data, video, and/or structured metadata may be fed into one or more machine learning models and/or algorithmic models configured to output one or more states of the UI and/or of the context (e.g., objects exterior to the vehicle, activity surrounding the vehicle, etc.). Various combinations of user parameters, user statistics, UI states, and context states may be combined to produce training data for machine learning models associating human and environmental characteristics with design, [0033]; and a method 800 of evaluating a control interface includes: displaying, on any of the displays described herein, an actual environment user interface (UI) to a user at block 810; receiving, from any of the sensors described herein, signals indicative of a first user interaction with the displayed actual environment UI at block 820; calculating a plurality of parameters of the first user interaction with the actual environment UI based on the received signals indicative of the first user interaction at
block 830; and outputting an indication for the actual user interface at block 840, [0070]).

Regarding Claim 4: The method according to claim 2, wherein: the evaluation data comprises an evaluation score, and the evaluation rule data comprises an evaluation index and a score corresponding to the evaluation index; and said processing the processed data and the preliminary evaluation result based on the evaluation rule data to obtain the evaluation data for the interaction element comprises: processing the processed data and the preliminary evaluation result to obtain an index value corresponding to the evaluation index; and obtaining the evaluation score for the interaction element based on the index value and the score corresponding to the evaluation index. Beal teaches (a computer-implemented method of evaluating control interfaces based on user interactions in simulated and actual environments, [ ], receiving, from one or more sensors, signals indicative of a first user interaction with the displayed simulated environment UI; calculating a plurality of parameters of the simulated environment UI based on the received signals indicative of the first user interaction, a total eyes off road time metric, and outputting an indication for one or both of the simulated or actual environment UI, [0007]).
Regarding Claim 8: The method according to claim 1, further comprising: selecting the target driving test scenario from a candidate driving test scenario; acquiring driving environment information of the target driving test scenario based on a data acquisition manner corresponding to the target driving test scenario. Beal teaches (receiving, from one or more sensors, signals indicative of a first user interaction with the displayed actual environment UI; [ ] the actual environment UI based on the received signals indicative of the first user interaction, [0006]); and displaying the driving environment information in the human-machine interaction system (a computer-implemented method of evaluating a control interface based on user interaction, including: displaying, on a display, an actual environment user interface (UI) to a user, wherein the actual environment UI is a test UI of a vehicle, [0006]), to enable the tester to control the vehicle to drive through interacting with the human-machine interaction system (wherein the actual environment UI is a test UI of a vehicle, [0006]), based on the driving environment information (evaluating control interfaces based on user interactions in simulated and actual environments, [0007]).
Regarding Claim 10: The method according to claim 8, wherein the candidate driving test scenario comprises at least one of: a virtual driving test scenario, wherein a data acquisition manner corresponding to the virtual driving test scenario comprises at least one of a manner of generating data based on a virtual software and a manner of collecting data through a virtual sensor; a virtual-real combination driving test scenario, wherein a data acquisition manner corresponding to the virtual-real combination driving test scenario comprises at least one of the manner of generating data based on the virtual software, the manner of collecting data through the virtual sensor, a manner of collecting data through a real sensor, and a manner of receiving data from an external device. Beal teaches (a computer-implemented method of evaluating control interfaces based on user interactions in simulated and actual environments, receiving, from one or more sensors, signals indicative of a first user interaction with the displayed simulated environment UI; receiving, from one or more sensors, signals indicative of a second user interaction with the displayed actual environment UI, [0007]); and a real driving test scenario, wherein a data acquisition manner corresponding to the real driving test scenario comprises at least one of the manner of collecting data through the real sensor and the manner of receiving data from the external device.
Regarding Claim 11: The method according to claim 1, further comprising: receiving editing data for the human-machine interaction system, wherein the editing data is used to edit at least one of a display content, an interaction manner, and a display form of the interaction element. Beal teaches (the sensor data, video, and/or structured metadata may be fed into one or more machine learning models and/or algorithmic models configured to output one or more states of the UI and/or of the context (e.g., objects exterior to the vehicle, activity surrounding the vehicle, etc.). Various combinations of user parameters, user statistics, UI states, and context states may be combined to produce training data for machine learning models associating human and environmental characteristics with design, bidirectionally, [0033]; and In some embodiments, the output from the system and/or one or more machine learning models is one or more of a candidate design or proposed design or updated design for a UI, [0036]); and generating the human-machine interaction system based on the editing data and displaying the human-machine interaction system, to enable the tester to interact with the human-machine interaction system (The system may be further configured to test permutations of the one or more candidate or proposed designs and determine which permutation(s) lead to more desirable outcomes, as designed by the user. For example, this functionality could be used to evaluate a proposed new automotive interface, evaluate a software update to an existing automotive interface, or evaluate candidate automotive interfaces as compared to one another. The system may be configured to evaluate the impact of singular changes and/or aggregate changes. Said another way, statistical generalizations can be created that may generally hold true across a population of people, across vehicles from a particular manufacturer, etc., [0036]).
Regarding Claim 13: The method according to claim 1, wherein: the tester comprises an evaluation driver; and the method further comprises: collecting a plurality of pieces of human-factor interaction data of at least one evaluation driver of the vehicle based on a corresponding evaluation content and a driver state of the at least one evaluation driver identified based on the plurality of pieces of human-factor interaction data, wherein the driver state comprises a fatigue level and/or an emotional state of the evaluation driver. Beal teaches (physiological sensors (e.g., electrodermal sensors, electroencephalography sensors, electromyography sensors), etc., [0042]; one or more sensors may be integrated into a wearable worn by the user or in proximity to the user, for example eye tracking glasses, watches, sensorized mats or cushions for on seats, etc. The sensors may be configured to capture user interaction with the actual or simulated UI, the user interaction comprising one or more of: touch, gaze, voice, bodily movement, and posture, [0042]); and obtaining experience evaluation data of a vehicle cabin based on the plurality of pieces of human-factor interaction data and the driver state, wherein the experience evaluation data is used to assess interactivity between the vehicle cabin and the evaluation driver.
Beal teaches (the sensor data, video, and structured metadata may be fed into one or more machine learning models and/or algorithmic models configured to output one or more parameters or states of the user, and the sensor data, video, and/or structured metadata may be fed into one or more machine learning models and/or algorithmic models configured to output one or more states of the UI and/or of the context (e.g., objects exterior to the vehicle, activity surrounding the vehicle, etc.). Various combinations of user parameters, user statistics, UI states, and context states may be combined to produce training data for machine learning models associating human and environmental characteristics with design, bidirectionally, [0033]).

Regarding Claim 14: The method according to claim 13, further comprising: synchronizing the plurality of pieces of human interaction data collected for a same evaluation driver; wherein said obtaining the experience evaluation data of the vehicle cabin based on the plurality of pieces of human-factor interaction data and the driver state comprises obtaining the experience evaluation data of the vehicle cabin based on the plurality of pieces of synchronized human-factor interaction data and the driver state. Beal teaches (the systems described herein may be configured such that paired data may be measured and/or collected from an individual driving an actual vehicle and using the interface of that vehicle and the same individual in-home, performing a driving proxy task in the same way while using the recorded interface of that vehicle.
Ideally, paired data will be collected over time for each individual in a variety of vehicles and a variety of driving contexts in either or both simulated and actual environments, [0031]; and the sensor data, video, and structured metadata may be fed into one or more machine learning models and/or algorithmic models configured to output one or more parameters or states of the user, for example, a posture state, a fatigue state, a visual attentional parameter, an auditory attentional parameter, or a tactile attentional parameter, for example in terms of both the moment and longitudinal temporal characteristics of each. In some instances of the present invention, the sensor data, video, and/or structured metadata may be fed into one or more machine learning models and/or algorithmic models configured to output one or more states of the UI and/or of the context (e.g., objects exterior to the vehicle, activity surrounding the vehicle, etc.), [0033]).

Regarding Claim 18: An evaluation system, comprising: a deployment subsystem configured to edit a human-machine interaction system. Beal teaches (the sensor data, video, and/or structured metadata may be fed into one or more machine learning models and/or algorithmic models configured to output one or more states of the UI and/or of the context (e.g., objects exterior to the vehicle, activity surrounding the vehicle, etc.).
Various combinations of user parameters, user statistics, UI states, and context states may be combined to produce training data for machine learning models associating human and environmental characteristics with design, bidirectionally, [0033], and In some embodiments, the output from the system and/or one or more machine learning models is one or more of a candidate design or proposed design or updated design for a UI, [0036]), generate the human-machine interaction system, display the human-machine interaction system, (The system may be further configured to test permutations of the one or more candidate or proposed designs and determine which permutation(s) lead to more desirable outcomes, as designed by the user. For example, this functionality could be used to evaluate a proposed new automotive interface, evaluate a software update to an existing automotive interface, or evaluate candidate automotive interfaces as compared to one another. The system may be configured to evaluate the impact of singular changes and/or aggregate changes. Said another way, statistical generalizations can be created that may generally hold true across a population of people, across vehicles from a particular manufacturer, etc., [0036]). 
select a target driving test scenario, acquire driving environment information of the target driving test scenario, and display the driving environment information on the human-machine interaction system; Beal teaches, (receiving, from one or more sensors, signals indicative of a first user interaction with the displayed actual environment UI; [ ] the actual environment UI based on the received signals indicative of the first user interaction, [0006]), and (displaying, on a display, an actual environment user interface (UI) to a user, wherein the actual environment UI is a test UI of a vehicle, [0006]), an evaluation subsystem configured to perform a method for evaluating a human-machine interaction of a vehicle, the method comprising: acquiring, in a target driving test scenario, human-factor interaction data and external data, wherein the human-factor interaction data is generated based on an interaction between a tester and a human-machine interaction system of the vehicle, (that paired data may be measured and/or collected from an individual driving an actual vehicle, [0031], parameters include one or more of: a total eyes off road time metric, a task completion time, a subtask completion time, or a performance score; calculating a plurality of parameters of the first user interaction, wherein the actual environment UI is a test UI of a vehicle, [0006]), and the external data is associated with driving of the vehicle; and evaluating an interaction element in the human-machine interaction system based on the human-factor interaction data and the external data to obtain evaluation data for the interaction element; (The system uses its external environment cameras to correlate exterior environmental conditions, [0067], this functionality could be used to evaluate a proposed new automotive interface, evaluate a software update to an existing automotive interface, or evaluate candidate automotive interfaces as compared to one another, [0036]), and a data
communication module configured to send the human-factor interaction data and the external data from the deployment subsystem to the evaluation subsystem, Beal teaches, (the computer readable medium and memory coupled to the processor may be local (e.g., in the vehicle) such that it may capture the data from the sensors (e.g., cameras, eye tracking sensors, etc.). In some embodiments, as shown in FIG. 1, the data stored on the local computer readable medium and memory may be wirelessly transmitted to a remote server 90 (e.g., Cloud) in real-time or on demand during data acquisition or after data acquisition is complete. Alternatively, the data may be wirelessly transmitted to a remote server 90 without any local storage, [0041] and FIG. 1). Claim Rejections – 35 U.S.C. § 103 The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim 5 is rejected under 35 U.S.C.
§ 103 as being taught by Beal, (US 20230367692 A1), hereafter Beal, “Systems and Methods for Testing and Analyzing Human Machine Interfaces,” in view of Miao, (CN 114218290 A), hereafter Miao, “Selecting Method Of Equipment Man-machine Interactive Interface Availability Evaluation.” Regarding claim 5, The method according to claim 1, wherein said evaluating the interaction element in the human-machine interaction system based on the human-factor interaction data and the external data comprises: acquiring subjective evaluation data for the interaction element in the human-machine interaction system; Beal does not teach, Miao teaches, (S3, based on human-computer interaction interface model, selecting the most suitable method for device human-machine interaction interface availability evaluation, [ ], S32, availability evaluation index selecting, starting from satisfying the availability requirement, selecting and determining the evaluation index, establishing evaluation index system, such as target feasibility, learnability, memorability, ease of use, fault tolerance, interface aesthetic, subjective satisfaction index, if the evaluation index system covers N indexes, then establishing the evaluation index system index set I = (i1, i2, ..., iN)); and evaluating the interaction element in the human-machine interaction system based on the human-factor interaction data, the external data, and the subjective evaluation data. (S6, finishing the evaluation data analysis. The evaluation data analysis is mainly for analyzing and sorting the various result data obtained in the evaluation process, mainly comprising: operation time of the user operation, error rate and other objective performance data; user subjective evaluation data; records of user error conditions; user open evaluation data and so on.
If the evaluation data involves both subjective and objective evaluation index data and a comprehensive evaluation result is needed, the index weights are calculated and determined, finally obtaining the final availability score of the evaluation object. Further, the evaluation data analysis can also form an evaluation report to be fed back to the related unit, so as to develop the subsequent design optimization work, Miao, [p. 11]). Beal and Miao are both considered to be analogous to the claimed invention because they are both in the field of human-machine interaction analysis. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the interaction analysis methods of Beal with the subjective data sources of Miao to develop the design optimization work, [p. 11]. Claims 6-7 are rejected under 35 U.S.C. § 103 as being taught by Beal, (US 20230367692 A1), hereafter Beal, “Systems and Methods for Testing and Analyzing Human Machine Interfaces,” in view of Sun, (US 20230271633 A1), hereafter Sun, “Safety Control Method And Apparatus For Autonomous Driving Assistance System,” in further view of Wang, (US 20230245651 A1), hereafter Wang, “Enabling User-Centered and Contextually Relevant Interaction.” Regarding claim 6, The method according to claim 2, further comprising: assessing the evaluation rule data, wherein the evaluation rule data comprises at least one of basic evaluation data, experience evaluation data, and user-defined evaluation data. Beal does not teach, Sun teaches, (With fast development of intelligent connected vehicles and autonomous vehicles, designing highly reliable and safe vehicle electronic systems is attracting increasing attention from various parties, and functional safety and the safety of the intended functionality are indispensable to system design of autonomous vehicles.
ISO 26262 and ISO DIS 21448 are industry standards for functional safety and the safety of the intended functionality of automotive electronic/electrical systems. The functional safety refers to “the absence of unreasonable risk due to hazards caused by malfunctioning behavior of electronic/electrical systems”. That is, the functional safety focuses on whether the system, after systematic failures, can enter a safe state to avoid greater hazards, or reduce the probability of occurrence of hazards by means of safety measures, rather than the original function or performance of the system, [0003]). Beal and Sun are both considered to be analogous to the claimed invention because they are both in the field of human-machine interaction analysis. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the interaction analysis methods of Beal with the standards adherence of Sun to ensure “the absence of unreasonable risk due to hazards caused by functional insufficiencies of the intended functionality or by foreseeable misuse by persons.” [0003]. Regarding claim 7, The method according to claim 6, wherein said assessing the evaluation rule data comprises: assessing, in another driving test scenario different from the target driving test scenario, the interaction element in the human-machine interaction system based on the evaluation rule data to obtain an objective assessment result; Beal does not teach, Wang teaches, (When encountering a new scenario, the AI system analyzes the context using pattern recognition, NLP, or other techniques 403 to identify which rule(s) or law(s) might be applicable, [0179]); obtaining, in the another driving test scenario, a subjective assessment result for the interaction element in the human-machine interaction system; (AI agent state modules can be employed to activate or deactivate agents as needed.
The AI system also includes performance measurement features that assess AI agent and user interaction using quantitative or qualitative metrics like satisfaction scores, response time, accuracy in response, and effective service hours, [0262]), and obtaining an assessment result for the evaluation rule data based on the objective assessment result and the subjective assessment result. (the AI system analyzes the context using pattern recognition, NLP, or other techniques 403 to identify which rule(s) or law(s) might be applicable. Relevant algorithm(s) or script(s) are retrieved from the rules engine 404 based on the recognized context, and then applied to the given scenario 405. This process may involve adjusting parameters or customizing the script to suit the specific situation [0179]. As the AI system encounters new scenarios and receives feedback on its performance, it should continue to learn 406 and update its algorithms and scripts stored in the rules engine 407, leading to improved accuracy and adaptability over time, [0180]) and the AI system may consider the user’s interaction history, preferences, and other contextual information to further refine the evaluation, [0286]. Beal and Wang are both considered to be analogous to the claimed invention because they are both in the field of human-machine interaction analysis. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the interaction analysis methods of Beal with the rule analysis of Wang to store the systems’ knowledge and enable access and application of information when needed, [0178]. Claim 9 is rejected under 35 U.S.C. 
§ 103 as being taught by Beal, (US 20230367692 A1), hereafter Beal, “Systems and Methods for Testing and Analyzing Human Machine Interfaces,” in view of Kim, (US 20210389144 A1), hereafter Kim, “User Interfaces for Customized Navigation Routes.” Regarding claim 9, The method according to claim 8, further comprising: acquiring updated driving environment information subsequent to the tester controlling the vehicle to drive through interacting with the human-machine interaction system based on the driving environment information, [0058]; and Beal does not teach, Kim teaches, (user input 603y is received selecting link 694 corresponding to a request to display more information about the driving restriction, [0251]), displaying the updated driving environment information in the human-machine interaction system, (in response to user input 603y, device 500 updates user interface 607 to display information about the driving restriction, as shown in FIG. 6Z, [0251]). Beal and Kim are both considered to be analogous to the claimed invention because they are both in the field of human-machine interaction analysis. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the interaction analysis methods of Beal with the information update techniques of Kim to provide the user with information while allowing the user to make the decision, [0256]. Claim 12 is rejected under 35 U.S.C.
§ 103 as being taught by Beal, (US 20230367692 A1), hereafter Beal, “Systems and Methods for Testing and Analyzing Human Machine Interfaces,” in view of Wang, Wei-li, (CN 202010163702 A), hereafter Wang Wei-li, “A Vehicle Human Machine Interface to Standard Evaluation Method.” Regarding claim 12, The method according to claim 1, wherein: the human-machine interaction system comprises at least one of a vehicle Head Up Display system, an instrument panel, an instrument, a central control screen, a co-driver interaction device, and an entertainment screen; Beal does not teach, Wang Wei-li teaches, (The external control system and instrument system further comprise, but are not limited to, a head-up display system, [p. 11], an instrument panel, an entertainment screen, and an automobile instrument, [p. 6]), the human-factor interaction data comprises at least one of physiological data of the tester, eye movement data of the tester, electroencephalogram data of the tester, hand operation track data of the tester, motion posture data of the tester, face data of the tester, and voice data of the tester; Beal teaches, (The sensors employed in the system may be [ ], physiological sensors (e.g., electroencephalography sensors), [0042], eye tracking sensors may also be installed in the dash as a ‘hard install’ to the vehicle, [0057], a user look, a user voice, a user bodily movement (arm movement, shoulder movement, etc.), or a user posture, [0085]), and the external data associated with the driving of the vehicle comprises at least one of driving data of the vehicle, Beal teaches, (the systems described herein may be configured such that paired data may be measured and/or collected from an individual driving an actual vehicle, [0031]), external traffic data, external environment data, (The distraction UI may be configured to display one or more of: simulated weather conditions, simulated road conditions, or simulated location conditions, although this list is non-limiting and may include
any conditions (weather, traffic, or otherwise) that a user may encounter while driving, [0092]), and voice data of the human-machine interaction system, (the systems described herein are configured to capture user interaction data, including touch, voice, [0026]). Beal and Wang Wei-li are both considered to be analogous to the claimed invention because they are both in the field of human-machine interaction analysis. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the interaction analysis components of Beal with the hardware components of Wang Wei-li, which can greatly improve the accuracy, objectivity, and veracity of human-machine interface evaluation, accurately and efficiently providing data and guiding suggestions for the analysis and evaluation of the vehicle human-machine interface, [Abstract]. Claims 15-17 are rejected under 35 U.S.C. § 103 as being taught by Beal, (US 20230367692 A1), hereafter Beal, “Systems and Methods for Testing and Analyzing Human Machine Interfaces,” in view of Official Notice.
Regarding claim 15, The method according to claim 13, wherein said collecting the plurality of pieces of human-factor interaction data of the at least one evaluation driver of the vehicle based on the corresponding evaluation content and the driver state of the at least one evaluation driver identified based on the plurality of pieces of human-factor interaction data comprises: executing at least one of a test project management task, a vehicle model management task, a tester management task, and a primary tester management task; executing a resource library management task; generating an evaluation process and the corresponding evaluation content based on a time line; executing a data analysis task based on the plurality of pieces of human-factor interaction data and the driver state; and constructing an index system, obtaining corresponding weights of the plurality of pieces of human-factor interaction data and the driver state based on the index system, and performing task execution by utilizing the weights. The specification recites this claim at [0019] and [00122], without disclosing how these basic engineering and analytical constructs provide any novel or innovative element to human-machine interaction analysis. The Examiner is taking Official Notice of the well-understood, routine, conventional nature of these additional elements, (MPEP § 2106.07(a) III).
Regarding claim 16, The method according to claim 15, wherein: the test project management task comprises at least one of a project creation task, a personnel assignment task, a vehicle model association task, and a project progress presentation task; the vehicle model management task comprises at least one of a creation task and a management task of vehicle model information; the tester management task comprises at least one of a tester adding task, a tester demographic information adding task, a demographic information user-defined adding task, a tester history recording task, a tester import and export task, and a demographic information statistics task; the primary tester management task comprises at least one of a primary tester adding task and a primary tester history recording task; the resource library management task comprises one or more of a use case management subtask, a questionnaire scale management subtask, a behavior experiment management subtask, a journey management subtask, and a voice evaluation management subtask in a subjective evaluation task, and an evaluation tool management subtask and a state evaluation algorithm model management subtask in an objective evaluation task; the data analysis task comprises at least one of a video behavior data analysis task, an eye movement data analysis task, a physiological data analysis task, a voice data analysis task, and a visual report task. The specification recites this claim at [0020], [0123], [00157], [00122], without disclosing any novel or innovative element added to these basic engineering and analytical constructs for human-machine interaction analysis. The Examiner is taking Official Notice of the well-understood, routine, conventional nature of these additional elements, (MPEP § 2106.07(a) III). 
Regarding claim 17, The method according to claim 15, wherein: said constructing the index system comprises constructing the index system based on composition elements of a human-machine-environment system, wherein the index system comprises at least one of a safety index, an efficiency index, and a pleasantness index; an evaluation method for the index in the index system comprises subjective evaluation and/or objective evaluation, wherein the subjective evaluation comprises a questionnaire scale; and the objective evaluation comprises at least one of a physiological index, behavior data, and eye movement data. The specification recites this claim at [0023], [00154], [00155], without disclosing any novel or innovative element added to these basic engineering and analytical constructs for human-machine interaction analysis. The Examiner is taking Official Notice of the well-understood, routine, conventional nature of these additional elements, (MPEP § 2106.07(a) III). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed on the enclosed PTO-892. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL BOROWSKI whose telephone number is (703)756-1822. The examiner can normally be reached M-F 8-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O’Connor can be reached on (571) 272-6787. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000. /MB/ Patent Examiner, Art Unit 3624 /MEHMET YESILDAG/Primary Examiner, Art Unit 3624
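Aside: the Miao reference cited in the claim 5 rejection above describes combining subjective and objective evaluation-index scores via calculated index weights into a final availability score. A minimal sketch of that kind of weighted aggregation, assuming hypothetical index names, weights, and scores (none are taken from Miao or from the claims):

```python
def availability_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of normalized per-index scores; weights must sum to 1."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("index weights must sum to 1")
    return sum(weights[name] * scores[name] for name in weights)

# Hypothetical index set in the spirit of Miao's I = (i1, i2, ..., iN)
scores = {"learnability": 0.8, "fault_tolerance": 0.6, "subjective_satisfaction": 0.9}
weights = {"learnability": 0.3, "fault_tolerance": 0.3, "subjective_satisfaction": 0.4}

print(round(availability_score(scores, weights), 2))  # prints 0.78
```

In a real evaluation per Miao, the weights would be derived from the established index system rather than assumed, and the per-index scores would come from the subjective (questionnaire) and objective (performance) data the Office Action quotes.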

Prosecution Timeline

Nov 14, 2024
Application Filed
Mar 05, 2026
Non-Final Rejection — §101, §102, §103 (current)

Prosecution Projections

1-2
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
