Prosecution Insights
Last updated: April 19, 2026
Application No. 18/851,582

EVALUATION METHOD, EVALUATION DEVICE, AND PROGRAM

Status: Final Rejection (§103)
Filed: Jan 02, 2025
Examiner: HONG, RICHARD J
Art Unit: 2623
Tech Center: 2600 — Communications
Assignee: Shimadzu Corporation
OA Round: 2 (Final)
Grant Probability: 78% (Favorable)
OA Rounds: 3-4
To Grant: 2y 0m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 78% (grants above average; 459 granted / 589 resolved; +15.9% vs TC avg)
Interview Lift: +4.4% (minimal lift, with vs. without interview, among resolved cases with interview)
Avg Prosecution: 2y 0m (fast prosecutor; 35 currently pending)
Total Applications: 624 (career history, across all art units)

Statute-Specific Performance

§101: 1.6% (-38.4% vs TC avg)
§103: 58.4% (+18.4% vs TC avg)
§102: 22.9% (-17.1% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 589 resolved cases

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1, 3-5, 10-12, 15-18, 20 and 24 are pending.

Response to Amendment

Applicant’s response to the last Office Action, dated Jan. 9, 2026, has been entered and made of record. In view of Applicant’s amendments to the title and abstract, the objection to the specification has been withdrawn. Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office Action.

Response to Arguments

Applicant’s arguments, dated Jan. 9, 2026, have been considered but are moot because the arguments do not apply to all of the references being used in the current rejection. Please see the following claim rejections for detailed analysis.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office Action.

Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Hirasawa et al. (JP 2007272570 A, IDS; hereinafter cited from the English translation by Clarivate Analytics) in view of Atsumori et al. (US 2024/0107439 A1, provided in prior Office Action).

As to claim 24, Hirasawa teaches an evaluation method (Hirasawa, Abs., a method of “reproducing a comment of an observer about an operation of a subject in a usability evaluation experiment”) comprising: obtaining information about a task performed by a subject (Hirasawa, e.g., FIG. 12, “a screen SH11 shown in FIG. 12 is displayed to perform post-task asking.
This post-processing screen SH11 is a list of video thumbnails, task items, time values, operation items, comments, utterance memos, and markings at each event occurrence time using the elapsed time from the start of the experiment”); determining, while the subject performs the task, a timing at which an input of a marker is received from at least one of the subject performing the task and an observer observing the subject (Hirasawa, e.g., FIGS. 14-15, “If an observer registers an event using the system for each event occurrence, the task items, operation details, and comments are listed while synchronizing with the subject's experiment video using the occurrence time of the event item as a key”); outputting a portion of the information about the task corresponding to the determined timing (Hirasawa, FIGS. 14-15, “when playing back the content of the experiment after the experiment is completed, a list can be created and displayed for each event item that also contrasts the thumbnails of the subject's video at the time of occurrence”); generating a screen for input allowing the subject to input feedback thereon on the task; and receiving an input of the feedback on the task from the subject via the screen for input (Hirasawa, FIG. 4, “Step ST3: In post-asking, while the informant (subject) and the observer reproduce the test recording video and view them together, the observer performs asking regarding the test to the subject and inputs it to the asking recording sheet”; it is reasonably inferred that the “informant (subject)” may input the answer on the same sheet on the screen in response to the “asking recording sheet”). 
Hirasawa does not explicitly teach “wherein the information about the task includes at least one of: an image provided to the subject while the task is performed; audio provided to the subject while the task is performed; a composition of air recorded while the task is performed; information specifying an odor to be reproduced based on a substance specified by the image provided to the subject and/or the audio provided to the subject; a feel provided to the subject while the task is performed; information specifying a feel to be reproduced based on a substance specified by the image provided to the subject and/or the audio provided to the subject; a taste provided to the subject while the task is performed; information specifying a taste to be reproduced based on a substance specified by the image provided to the subject and/or the audio provided to the subject; two or more of the image provided to the subject, the audio provided to the subject, the composition of air, the information specifying the odor, the feel provided to the subject, the information specifying the feel, the taste provided to the subject, and the information specifying a taste; and a screen of a game or a progress of the game at the time of the timing”.

However, Atsumori teaches the concept wherein the information about the task includes at least one of: an image provided to the subject while the task is performed (Atsumori, e.g., see FIGS. 5-6, [0037], “memorization image (S1)” and “recognition image (S2)”).
At the time of the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the method of “reproducing a comment of an observer about an operation of a subject in a usability evaluation experiment” taught by Hirasawa to further comprise the step of providing an image such as the “memorization image (S1)”, e.g., a user’s manual or any other instruction related to the specific experiment, to the test subject when performing the “usability evaluation experiment”, as taught by Atsumori, in order to improve the “usability evaluation experiment”.

Claims 1, 3-5, 10-12, 15-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hirasawa et al. (JP 2007272570 A, IDS) in view of Sazuka (US 2024/0161543 A1) and Atsumori et al. (US 2024/0107439 A1, provided in prior Office Action).

As to claim 1, Hirasawa teaches an evaluation method (Hirasawa, Abs., a method of “reproducing a comment of an observer about an operation of a subject in a usability evaluation experiment”) comprising: obtaining information about a task performed by a subject (Hirasawa, e.g., FIG. 12, “a screen SH11 shown in FIG. 12 is displayed to perform post-task asking. This post-processing screen SH11 is a list of video thumbnails, task items, time values, operation items, comments, utterance memos, and markings at each event occurrence time using the elapsed time from the start of the experiment”); outputting a portion of the information about the task corresponding to the determined timing (Hirasawa, e.g., FIG. 12, “a screen SH11 shown in FIG. 12 is displayed to perform post-task asking.
This post-processing screen SH11 is a list of video thumbnails, task items, time values, operation items, comments, utterance memos, and markings at each event occurrence time using the elapsed time from the start of the experiment”); generating a screen for input allowing the subject to input feedback thereon on the task; and receiving an input of the feedback on the task from the subject via the screen for input (Hirasawa, FIG. 4, “Step ST3: In post-asking, while the informant (subject) and the observer reproduce the test recording video and view them together, the observer performs asking regarding the test to the subject and inputs it to the asking recording sheet”; it is reasonably inferred that the “informant (subject)” may input the answer on the same sheet on the screen in response to the “asking recording sheet”).

Hirasawa does not teach “obtaining biological information of the subject performing the task over time; determining a timing at which the biological information satisfies a predetermined condition”. However, Sazuka teaches the concept of obtaining biological information of the subject performing the task over time (Sazuka, FIG. 7, [0192], “time series data is derived as the arousal level 24e, and the derived time series data is stored, in the storage section 24, in association with the identifier 24d of the person to be evaluated”); and determining a timing at which the biological information satisfies a predetermined condition (Sazuka, FIG. 4, [0077], e.g., “when potential of facial muscles in a predetermined part is measured and a thus-obtained measurement value is higher than a predetermined threshold, it is possible to estimate whether the arousal level of the target living body is high or low”).
At the time of the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the system for “useability experiment” taught by Hirasawa to further obtain the “arousal level 24e” by measuring, e.g., the “potential of facial muscles”, as taught by Sazuka, in order to increase the accuracy of the usability evaluation experiment by further observing the subject’s “cognitive capacity of the user” in association with “task difference Δtv in the dispersion of the reaction times” (Sazuka, [0108]).

Hirasawa in view of Sazuka does not explicitly teach “wherein the information about the task includes at least one of: an image provided to the subject while the task is performed; audio provided to the subject while the task is performed; a composition of air recorded while the task is performed; information specifying an odor to be reproduced based on a substance specified by the image provided to the subject and/or the audio provided to the subject; a feel provided to the subject while the task is performed; information specifying a feel to be reproduced based on a substance specified by the image provided to the subject and/or the audio provided to the subject; a taste provided to the subject while the task is performed; information specifying a taste to be reproduced based on a substance specified by the image provided to the subject and/or the audio provided to the subject; two or more of the image provided to the subject, the audio provided to the subject, the composition of air, the information specifying the odor, the feel provided to the subject, the information specifying the feel, the taste provided to the subject, and the information specifying a taste; and a screen of a game or a progress of the game at the time of the timing”. However, Atsumori teaches the concept wherein the information about the task includes at least one of: an image provided to the subject while the task is performed (Atsumori, e.g., see FIGS. 5-6, [0037], “memorization image (S1)” and “recognition image (S2)”).

At the time of the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the method of “reproducing a comment of an observer about an operation of a subject in a usability evaluation experiment” taught by Hirasawa to further comprise the step of providing an image such as the “memorization image (S1)”, e.g., a user’s manual or any other instruction related to the specific experiment, to the test subject when performing the “usability evaluation experiment”, as taught by Atsumori, in order to improve the “usability evaluation experiment”.

As to claim 3, Sazuka teaches the evaluation method according to claim 1, wherein the predetermined condition includes at least one of: that a feature value in the biological information reaches a given threshold value (Sazuka, FIG. 4, [0077], e.g., “when potential of facial muscles in a predetermined part is measured and a thus-obtained measurement value is higher than a predetermined threshold, it is possible to estimate whether the arousal level of the target living body is high or low”); and that an amount by which the feature value in the biological information changes continues to have a given value or smaller for a prescribed period of time or longer. Examiner renders the same motivation as in claim 1.

As to claim 4, Sazuka teaches the evaluation method according to claim 1, wherein the predetermined condition includes that a feature value in the biological information is maintained within a given range for a given period of time or longer (Sazuka, FIG. 3, [0160], “the duration of the arousal level indicates, for example, a period (duration Δt1) in which the arousal level is maintained in a high state, as illustrated in FIG. 3”).
As to claim 5, Hirasawa teaches the evaluation method according to claim 1, further comprising receiving an input of a marker from at least one of the subject and an observer observing the subject while the subject performs a task (Hirasawa, e.g., FIGS. 14-15, “If an observer registers an event using the system for each event occurrence, the task items, operation details, and comments are listed while synchronizing with the subject's experiment video using the occurrence time of the event item as a key”), wherein the outputting includes providing a portion of the information about the task corresponding to a time point at which the marker is put (Hirasawa, FIGS. 14-15, “when playing back the content of the experiment after the experiment is completed, a list can be created and displayed for each event item that also contrasts the thumbnails of the subject's video at the time of occurrence”).

As to claim 10, Hirasawa teaches the evaluation method according to claim 1, wherein the portion of the information output includes at least one of an image, an audio, an odor, a feel, and a taste (Hirasawa, FIG. 3, “the test log synchronization recording unit 12 uses the video capture 7 … the entire image of the subject, the image of the finger during operation, the speech voice uttered by the subject, and the test observation recording data and the asking recording data are recorded together with the time information at the time of input”).

As to claim 11, it differs from claim 1 only in that it is the evaluation method of claim 1, further comprising the step that “the obtaining includes obtaining the information about the task when a determined period of time elapses while the task is performed”.
It recites substantially the same limitations as in claim 1, and Hirasawa in view of Sazuka and Atsumori teaches them, and Atsumori further teaches the concept that “the obtaining includes obtaining the information about the task when a determined period of time elapses while the task is performed (Atsumori, e.g., FIGS. 5-6, “rest for 16-22 seconds”, “S1 memorization image for 1.5 seconds”, etc.)”. Examiner renders the same motivation as in claim 1. Please also see claim 1 for detailed analysis.

As to claim 12, Hirasawa teaches the evaluation method according to claim 1, wherein the portion of the information output corresponds to the determined timing and a given period of time before and after the determined timing (Hirasawa, FIG. 11, e.g., “post-processing screen SH11 is a list of video thumbnails, task items, time values, operation items, comments, utterance memos, and markings at each event occurrence time using the elapsed time from the start of the experiment”).

As to claim 15, Sazuka teaches the evaluation method according to claim 1, wherein the outputting includes: determining a state of the subject by analyzing the portion of the information output; and outputting the state of the subject (Sazuka, FIG. 3, [0160], e.g., “the duration Δt1 is an index related to durability of concentration, and longer duration Δt1 indicates having an ability to maintain high concentration longer. The rise time Δt2 is an index related to quickness of on/off switching, and shorter rise time Δt2 indicates an ability to more quickly concentrate on a work”). Examiner renders the same motivation as in claim 1.

As to claim 16, Sazuka teaches the evaluation method according to claim 15, wherein the outputting includes outputting information urging that a degree of the state of the subject be input, and the receiving includes obtaining an input of the degree of the state of the subject (Sazuka, FIG. 3, [0160], e.g., “the duration Δt1 is an index related to durability of concentration, and longer duration Δt1 indicates having an ability to maintain high concentration longer. The rise time Δt2 is an index related to quickness of on/off switching, and shorter rise time Δt2 indicates an ability to more quickly concentrate on a work”).

As to claim 17, Hirasawa teaches the evaluation method according to claim 1, wherein the outputting includes, when a first timing (Hirasawa, FIG. 12, e.g., “time 0:03”) and a second timing (Hirasawa, FIG. 12, e.g., “time 0:15”) are determined as the timing, providing a portion of the information corresponding to the first timing, and together therewith, a portion of the information corresponding to the second timing (Hirasawa, see FIG. 12).

As to claim 18, Hirasawa teaches the evaluation method according to claim 1, wherein the outputting includes, when a first timing (Hirasawa, FIG. 12, e.g., “time 0:03”) and a second timing (Hirasawa, FIG. 12, e.g., “time 0:15”) are determined as the timing, outputting a portion of the information corresponding to the first timing and thereafter providing a portion of the information corresponding to the second timing (Hirasawa, see FIG. 12).

As to claim 20, it differs from claim 1 only in that it is the evaluation apparatus performing the evaluation method of claim 1. It recites similar limitations as in claim 1, and Hirasawa in view of Sazuka and Atsumori teaches them. Examiner renders the same motivation as in claim 1. Please see claim 1 for detailed analysis.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure: Gerken III (US 2013/0337421 A1) teaches the concept of “identifying emotions and notifying a user that may otherwise have difficulty identifying the emotions displayed by others” (Abs.); and Nihonyanagi et al. (US 2022/0415086 A1) teaches the concept of “an emotion estimation unit” (Abs.).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD J HONG, whose telephone number is (571) 270-7765. The examiner can normally be reached from 9:00 AM to 6:00 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chanh Nguyen, can be reached at (571) 272-7772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Mar. 4, 2026
/RICHARD J HONG/
Primary Examiner, Art Unit 2623

Prosecution Timeline

Jan 02, 2025
Application Filed
Sep 05, 2025
Non-Final Rejection — §103
Jan 09, 2026
Response Filed
Mar 04, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596398
FLEXIBLE ELECTRONIC DEVICE AND OPERATION METHOD THEREOF
2y 5m to grant • Granted Apr 07, 2026
Patent 12578827
DISPLAY SUBSTRATE AND DISPLAY DEVICE
2y 5m to grant • Granted Mar 17, 2026
Patent 12572215
ELECTRONIC DEVICE, AND METHOD FOR PREVENTING/REDUCING MISRECOGNITION OF GESTURE IN ELECTRONIC DEVICE
2y 5m to grant • Granted Mar 10, 2026
Patent 12573159
FUTURE POSE PREDICTOR FOR A CONTROLLER
2y 5m to grant • Granted Mar 10, 2026
Patent 12566514
TOUCH STRUCTURE HAVING THROUGH HOLES ON OVERLAPPING PARTS AND DISPLAY PANEL
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 82% (+4.4%)
Median Time to Grant: 2y 0m
PTA Risk: Moderate
Based on 589 resolved cases by this examiner. Grant probability derived from career allow rate.
