DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 7, 12, 17, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 2025/0216934) in view of Seo et al. (US 2020/0104670), and further in view of Yildiz et al. (US 2021/0113129).
In regard to claim 1, Kim et al. teach a method for evaluating and optimizing user experience in human-computer interaction, the method comprising: acquiring a multimodal biological signal of a user during a human-computer interaction process, the multimodal biological signal at least comprising eye movement data (fig. 3 element 310, gaze tracker); and preprocessing the multimodal biological signal to extract a fixation position (paragraph 63), the fixation position being extracted based on an eye movement data feature corresponding to the eye movement data (element 310 and paragraph 63, gaze tracking). Kim et al. do not teach index parameters of a plurality of dimensions at each time point; standardizing the index parameters of each of the plurality of dimensions; obtaining, based on each standardized index parameter of each dimension and a predetermined weight for each index parameter, a user experience evaluation score of each dimension by means of a weighted average; obtaining, based on the user experience evaluation score of each dimension and a predetermined weight for the user experience evaluation score, an overall user experience evaluation result by means of the weighted average; associating the overall user experience evaluation result with the fixation position; visually presenting, based on the overall user experience evaluation result associated with the fixation position, user experience situation data of the user for a current fixation position on a human-computer interaction interface; and acquiring a correction parameter based on at least one of the user experience evaluation score of the user for each dimension or the visually presented user experience situation data of the user for different fixation positions on the human-computer interaction interface, and optimizing, based on the correction parameter, a weight for each index parameter of each dimension and a weight for the user experience evaluation score of each dimension for a finite number of iterations using a machine learning optimization algorithm.
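For clarity, extracting a fixation position from raw gaze samples can be pictured with a simple dispersion-threshold sketch of the kind below; the thresholding approach, names, and values are illustrative assumptions and are not Kim et al.'s disclosed gaze-tracking method:

```python
# Hypothetical sketch of extracting a fixation position from raw gaze samples
# using simple dispersion thresholding (I-DT style). Kim et al.'s actual gaze
# tracker (fig. 3 element 310) is not reproduced; the threshold is assumed.
def extract_fixation(samples, max_dispersion=0.05):
    """Return the centroid of the gaze samples if they are tightly clustered
    (i.e., a fixation), otherwise None (a saccade or noisy window)."""
    xs = [x for x, _ in samples]
    ys = [y for _, y in samples]
    dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
    if dispersion > max_dispersion:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# e.g., extract_fixation([(0.50, 0.31), (0.51, 0.30), (0.49, 0.32)])
# -> (0.5, 0.31), the fixation position associated with this window
```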
Seo et al. teach index parameters of a plurality of dimensions at each time point (each dimension of the multimodal signal shown in fig. 4 has index parameters shown in fig. 6); standardizing the index parameters of each of the plurality of dimensions (fig. 6 elements 611-1, 612-1 and 613-1; Seo et al. teach the parameters of each dimension (audio, video, language) being numbers between 0 and 1); obtaining, based on each standardized index parameter of each dimension and a predetermined weight for each index parameter, a user experience evaluation score of each dimension by means of a weighted average (fig. 6b element 621 and paragraphs 107 and 120; Seo et al. teach applying weights to the audio, video and language data and using these dimensions to determine the emotion, and paragraph 120 states using an average of the models to determine the emotion); obtaining, based on the user experience evaluation score of each dimension and a predetermined weight for the user experience evaluation score, an overall user experience evaluation result by means of the weighted average (fig. 5A element 505 and paragraph 120, confidence values for each candidate emotion); and acquiring a correction parameter based on at least one of the user experience evaluation score of the user for each dimension (paragraph 126, user provides feedback based on the obtained emotion) or the visually presented user experience situation data of the user for different fixation positions on the human-computer interaction interface, and optimizing, based on the correction parameter, a weight for each index parameter of each dimension and a weight for the user experience evaluation score of each dimension for a finite number of iterations using a machine learning optimization algorithm (fig. 7(c), element 710, paragraphs 120 and 129; as the weights are adjusted, the confidence value of each emotion changes, and this confidence value is the user experience score). Seo et al. do not teach associating the overall user experience evaluation result with the fixation position, or visually presenting, based on the overall user experience evaluation result associated with the fixation position, user experience situation data of the user for a current fixation position on a human-computer interaction interface.
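For clarity of the record, the standardization and weighted-average scoring recited above may be pictured with the following minimal sketch; the function names, example values, weights, and the z-score standardization method are illustrative assumptions and are not taken from Kim et al. or Seo et al. (Seo et al.'s fig. 6 parameters are simply values between 0 and 1):

```python
# Hypothetical sketch of the claimed scoring pipeline: standardize the index
# parameters of each dimension, take a weighted average per dimension, then a
# weighted average across dimensions for the overall evaluation result.
from statistics import mean, pstdev

def standardize(values):
    """Z-score standardization of raw index parameters (assumed method)."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]

def dimension_score(params, weights):
    """Weighted average of standardized index parameters for one dimension."""
    z = standardize(params)
    return sum(w * v for w, v in zip(weights, z)) / sum(weights)

# Example: three dimensions (e.g., audio, video, language in Seo et al.'s
# terms); the raw parameter values and weights here are invented for
# illustration only.
dims = {
    "audio":    ([0.2, 0.7, 0.5], [1.0, 2.0, 1.0]),
    "video":    ([0.9, 0.4, 0.6], [1.0, 1.0, 1.0]),
    "language": ([0.3, 0.8, 0.1], [2.0, 1.0, 1.0]),
}
scores = {name: dimension_score(p, w) for name, (p, w) in dims.items()}

dim_weights = {"audio": 0.5, "video": 0.3, "language": 0.2}  # assumed weights
overall = (sum(dim_weights[n] * s for n, s in scores.items())
           / sum(dim_weights.values()))
print(scores, overall)
```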
Kim et al. and Seo et al. are analogous art because they are from the same field of endeavor, namely emotion analysis.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to provide the apparatus of Kim et al. with the corrected model weights of Seo et al., because the iterative learning process of Seo et al. would allow for more accurate prediction and user customization.
Yildiz et al. teach associating the overall user experience evaluation result with the fixation position (fig. 6 element 602); and visually presenting, based on the overall user experience evaluation result associated with the fixation position, user experience situation data of the user for a current fixation position on a human-computer interaction interface (element 610 and paragraph 90).
Kim et al., Seo et al., and Yildiz et al. are analogous art because they are all from the same field of endeavor, namely emotion analysis.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to provide the apparatus of Kim et al. and Seo et al. with the visual feedback of Yildiz et al., because visual feedback is a widely known method for providing user information. One of ordinary skill in the art would recognize that using the visual feedback of Yildiz et al. in lieu of the feedback of Kim et al. and Seo et al. would work predictably and would provide the user with an easy-to-use feedback method.
Claims 12 and 20 are the system and computer-readable medium claims corresponding to the method of claim 1, and are rejected for the same reasons.
In regard to claims 6 and 17, Seo et al. teach wherein the machine learning optimization algorithm is a Newton method or a gradient descent method (paragraph 191); and said optimizing, based on the correction parameter, the weight for each index parameter of each dimension and the weight for the user experience evaluation score of each dimension for the finite number of iterations using the machine learning optimization algorithm comprises: building a machine learning optimization model by taking the weight for each index parameter of each dimension and the weight for the user experience evaluation score of each dimension as to-be-optimized parameters (fig. 7, pre-update, and paragraph 128; the algorithm generates a user experience of "happy" from the provided weights, but the feedback is "neutral"); constructing a loss function based on the correction parameter (paragraphs 128 and 129, the updater determines a need to change the weight values); and optimizing the machine learning optimization model for the finite number of iterations based on the loss function, and obtaining the optimized weight for each index parameter of each dimension and the optimized weight for the user experience evaluation score of each dimension (fig. 7, post-update, and paragraph 130).
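For clarity, the claimed finite-iteration weight update may be pictured with the following minimal gradient-descent sketch; the squared-error loss, the numerical gradient, and all names and values are illustrative assumptions and are not taken from Seo et al.:

```python
# Hypothetical gradient-descent sketch of the weight-update step described in
# claims 6 and 17. The loss compares the model's predicted overall score with
# the user's correction parameter (target); both the loss and the finite-
# difference gradient are assumptions, not Seo et al.'s actual updater.
def optimize_weights(weights, predict, target, lr=0.05, iterations=100,
                     eps=1e-4):
    """Run a finite number of gradient-descent iterations on a squared-error
    loss between the predicted overall score and the corrected score."""
    w = list(weights)
    for _ in range(iterations):
        loss = (predict(w) - target) ** 2
        # Numerical gradient: perturb each weight in turn.
        grad = []
        for i in range(len(w)):
            w_eps = w[:i] + [w[i] + eps] + w[i + 1:]
            grad.append(((predict(w_eps) - target) ** 2 - loss) / eps)
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w

# Toy usage: the "model" is a weighted average of fixed dimension scores, and
# the user's feedback says the overall score should have been 0.5.
scores = [0.9, 0.4, 0.6]
predict = lambda w: sum(wi * si for wi, si in zip(w, scores)) / sum(w)
new_w = optimize_weights([1.0, 1.0, 1.0], predict, target=0.5)
```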
In regard to claims 7 and 18, Yildiz et al. teach wherein said visually presenting, based on the overall user experience evaluation result associated with the fixation position, the user experience situation data of the user for the current fixation position on the human-computer interaction interface comprises: labeling at least one of different colors or transparency levels for the current fixation position based on the overall user experience evaluation result associated with the fixation position (paragraph 53, feedback may include flashing yellow on the display).
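For clarity, labeling a fixation position with a color and transparency level derived from the overall evaluation result may be pictured as follows; the particular red-to-green mapping and alpha rule are illustrative assumptions, not Yildiz et al.'s disclosed yellow flashing:

```python
# Hypothetical sketch of the color/transparency labeling in claims 7 and 18.
# The mapping itself is an assumption; Yildiz et al. describe, for example,
# flashing yellow on the display (paragraph 53).
def score_to_rgba(score):
    """Map an overall score in [0, 1] to a red-to-green overlay whose
    transparency also tracks the score (low score = more opaque red)."""
    s = min(max(score, 0.0), 1.0)
    red, green = int(255 * (1 - s)), int(255 * s)
    alpha = 1.0 - 0.7 * s  # poorer experience drawn more opaquely
    return (red, green, 0, alpha)

# e.g., draw score_to_rgba(overall) at the current fixation (x, y) on the UI
```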
Allowable Subject Matter
Claims 2-5, 8-11, 13-16 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is an examiner’s statement of reasons for allowance: in regard to claims 2-4 and 13-15, the prior art teaches the plurality of dimensions being video data, audio data, and brainwave data, as shown in fig. 4 of Seo et al. The prior art fails to teach or make obvious a fatigue degree dimension, an emotional state dimension, and a cognitive load dimension.
In regard to claims 5 and 16, the prior art fails to teach or make obvious the index parameters having positive and negative values. Figure 6 of Seo et al. teaches the index parameters being numbers between 0 and 1.
In regard to claims 8 and 9, the prior art fails to teach or make obvious the subjective questionnaire in combination with the other features of the claims.
In regard to claim 10, the prior art fails to teach or make obvious “filling a blank region based on at least one of a color or transparency level of a fixation position around the blank region, to visually present the user experience situation data of the user for different regions on the human-computer interaction interface” in combination with the claim’s other features (an illustrative sketch of this limitation follows below).
In regard to claims 11 and 19, the prior art fails to teach or make obvious the subjective questionnaire in combination with the other features of the claims.
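For clarity only, the claim 10 limitation noted above may be pictured with the following minimal sketch; the neighbor-averaging rule, function name, and RGBA representation are illustrative assumptions and do not appear in the claims or in the prior art of record:

```python
# Hypothetical illustration of the claim 10 limitation: fill a blank region by
# averaging the RGBA labels of surrounding fixation positions. The averaging
# rule is an assumption; the claim only requires using nearby color/alpha.
def fill_blank(neighbor_rgbas):
    """Average the color and transparency of fixation positions around the
    blank region to produce its fill value."""
    n = len(neighbor_rgbas)
    return tuple(sum(c[i] for c in neighbor_rgbas) / n for i in range(4))

# e.g., fill_blank([(255, 0, 0, 1.0), (0, 255, 0, 0.3)])
# -> (127.5, 127.5, 0.0, 0.65)
```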
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Response to Arguments
Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH R HALEY whose telephone number is (571)272-0574. The examiner can normally be reached 7:30am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH R HALEY/ Primary Examiner, Art Unit 2621