DETAILED ACTION
Applicant’s arguments, filed on 11/19/2025, have been fully considered. The following rejections and/or objections are either reiterated or newly applied. They constitute the complete set presently being applied to the instant application.
Applicant has amended the claims, filed on 11/19/2025; the rejections newly made in the instant Office action have therefore been necessitated by amendment.
Claims 1-12 are currently under examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 9 is objected to because of the following informalities:
In claim 9, line 3, “the input processing unit determine” should read “the input processing unit determines”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-12 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claim 1, applicant has added the limitation “more complex in colors, shapes, or patterns than a predetermined level” in lines 17-18, which is not described in the originally filed claims, specification, or drawings. Thus, the newly added limitation is deemed to be new matter. In particular, neither the claims nor the specification describes a predetermined level to which the complexity of the object is compared. Therefore, the claim introduces new matter and fails to comply with the written description requirement of 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph. Claims 2-10 are also rejected due to their dependence on claim 1.
Regarding claim 11, applicant has added the limitation “more complex in colors, shapes, or patterns than a predetermined level” in lines 13-14, which is not described in the originally filed claims, specification, or drawings. Thus, the newly added limitation is deemed to be new matter. In particular, neither the claims nor the specification describes a predetermined level to which the complexity of the object is compared. Therefore, the claim introduces new matter and fails to comply with the written description requirement of 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph. Claim 12 is also rejected due to its dependence on claim 11.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, the claim recites the limitation “stimulating elements that are more complex in colors, shapes, or patterns than a predetermined level” in lines 17-18. It is unclear what constitutes “more complex” than a predetermined level because no predetermined level is defined in the claims or the specification. The broad and indefinite scope of the limitation fails to inform a person of ordinary skill in the art with reasonable certainty of the metes and bounds of the claimed invention; therefore, the claim is indefinite. For purposes of examination, the limitation is being interpreted as referring to an object that contains more than one color, is more complex than a basic shape such as a circle or a rectangle, or has a pattern. Claims 2-10 are also rejected due to their dependence on claim 1.
Regarding claim 11, the claim recites the limitation “stimulating elements that are more complex in colors, shapes, or patterns than a predetermined level” in lines 13-14. It is unclear what constitutes “more complex” than a predetermined level because no predetermined level is defined in the claims or the specification. The broad and indefinite scope of the limitation fails to inform a person of ordinary skill in the art with reasonable certainty of the metes and bounds of the claimed invention; therefore, the claim is indefinite. For purposes of examination, the limitation is being interpreted as referring to an object that contains more than one color, is more complex than a basic shape such as a circle or a rectangle, or has a pattern. Claim 12 is also rejected due to its dependence on claim 11.
Regarding claim 12, the claim recites the limitation “a program” in line 1. It is unclear if this limitation is referring to the program from claim 11, line 3, or a different program. If it is referring to the program from claim 11, it needs to refer back to it. If it is referring to a different program, it needs to be distinguished from the program from claim 11. For purposes of examination, it is being interpreted as referring to the program from claim 11.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5 and 7-12 are rejected under 35 U.S.C. 103 as being unpatentable over Goto (JP 2019208758) in view of Shudo (JP 2019171022) and Gordon (US 20170337476). Citations to JP 2019208758 and JP 2019171022 will refer to the English Machine Translations that accompany this Office Action.
Regarding independent claim 1, Goto teaches an evaluation device for evaluating a test subject's ability to identify an object (Abstract: “The cognitive function measurement device 10 provides a plurality of tasks to a subject based on the acquired task data, detects the answer elements and answer time for each provided task, calculates a score for each provided task based on at least one of the detected answer elements and answer time”), the evaluation device comprising:
an input unit, comprising one or more processors ([0048]: “a data processing unit 160 that performs various types of data processing”), configured to receive input of the test subject ([0077]: “The operation unit 190 is used for inputting operation data by the subject, and its function can be realized by a touch panel or a touch panel type display.”);
a display control unit, comprising the one or more processors, configured to display, on a display device ([0076]: “The display control unit 180 executes various processes, such as displaying a task or an image related to the measurement results of cognitive function, to generate an image to be displayed on the display unit 170 .”), a test screen including a target object that the test subject should select and a non- target object that the test subject should not select ([0087]-[0088]: “a plurality of figures (hereinafter referred to as "problem figures") are) 30, one specified figure (hereinafter referred to as the "specified figure").) 31 (hereinafter referred to as the "just fit task").) is generated. In this case, the task control unit 162 generates task data and answer data for the Just Fit task by: (A1) randomly setting multiple shapes as task shapes 30; (A2) setting one shape from the multiple set task shapes 30 as the specific shape 31; (A3) placing the specific shape 31 at a specific position in the task display area 33 and display area 34; (A4) randomly placing the other task shapes 30, including their orientation, in the specified display area 34; and repeating the processes of (A1) to (A4) for each task.”; Fig. 3A).
However, Goto does not teach a visual line detection unit comprising an imaging device configured to detect a visual line of the test subject based on a corneal reflection and a pupil position of the test subject.
Shudo discloses an evaluation device and method. Specifically, Shudo teaches a visual line detection unit comprising an imaging device configured to detect a visual line of the test subject based on a corneal reflection and a pupil position of the test subject ([0013]: “FIG. 1 is a perspective view that schematically illustrates an example of an gaze tracking device 100”; [0020]: “The stereo camera device 102 can detect the position of the pupil 112 and the position of the corneal reflection image 113 based on the brightness of the captured image”; [0034]: “The gaze point detection unit 214 detects position data of the subject's gaze point based on the image data of the eyeball 111 acquired by the image data acquisition unit 206. In this embodiment, the position data of the gaze point refers to the position data of the intersection between the subject's line of sight vector defined in a three-dimensional global coordinate system and the display screen 101S of the display device 101. The gaze point detection unit 214 detects the gaze vectors of the left and right eyeballs 111 of the subject based on the position data of the pupil center and the position data of the corneal curvature center acquired from the image data of the eyeballs 111 . After the line-of-sight vector is detected, the gaze point detection unit 214 detects position data of the gaze point indicating the intersection of the line-of-sight vector and the display screen 101S”). Goto and Shudo are analogous arts as they are both related to systems that evaluate a user’s cognitive function.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the visual line detections from Shudo into the device from Goto as it allows the device to determine where the user is looking, which can ensure measurements are only taken when the user is looking at the correct location, which can ensure more accurate measurements.
The Goto/Shudo combination teaches an input processing unit, comprising the one or more processors, configured to determine whether or not the target object is selected on a basis of the input of the test subject received by the input unit; and an evaluation unit, comprising the one or more processors, configured to evaluate the test subject's ability to identify the object on a basis of a response time required for the test subject to input when the input processing unit determines that the target object is selected (Goto, [0096]: “when the task control unit 162 presents a task based on task data presenting the above-mentioned just-fit task, it: (a1) starts measurement from the timing when the task starts accepting answers; (a2) each time the subject selects one of the task figures as an answer, it refers to the answer data and judges whether it is correct or not; (a3) if the answer is incorrect, it continues to perform the task (accepts the selection of the next task figure); if the answer is correct, it specifies the time up to that time as the answer time and ends the task; and (a4) it repeats (a1) to (a3) until all tasks are completed.”).
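For clarity, the answer-timing procedure relied upon in Goto's quoted steps (a1)-(a3) can be sketched as follows. This is an illustrative reading of the machine translation only; the function and variable names are hypothetical and do not appear in the reference.

```python
import time

def run_just_fit_task(correct_figure, select_fn, clock=time.monotonic):
    """Illustrative sketch of Goto's quoted steps:
    (a1) start measurement when the task begins accepting answers;
    (a2) judge each selected figure against the answer data;
    (a3) on an incorrect answer, continue accepting selections; on a
         correct answer, record the elapsed time as the answer time
         and end the task."""
    start = clock()                       # (a1) timing starts here
    mistakes = 0
    while True:
        choice = select_fn()              # (a2) subject selects a task figure
        if choice == correct_figure:      # correct answer: task ends
            return clock() - start, mistakes  # (a3) answer time, error count
        mistakes += 1                     # incorrect: continue the task
```

Step (a4), repeating until all tasks are completed, would simply invoke this routine once per task.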
However, the Goto/Shudo combination does not teach including the visual line in the evaluation.
Shudo teaches including the visual line in the evaluation ([0040]: “The memory unit 222 also stores an evaluation program that causes a computer to execute the following processes: a process for displaying an image; a process for detecting the position of the gaze point of a subject observing the display screen; a process for performing display operations including a first display operation for displaying a specific object on the display screen; and a second display operation for displaying the specific object and multiple comparison objects different from the specific object on the display screen after the first display operation has been performed; a process for setting a specific area corresponding to the specific object and a comparison area corresponding to each comparison object on the display screen; a process for determining, based on the position data of the gaze point, whether or not the gaze point is present in the specific area and the comparison area during the display period in which the second display operation is performed, and outputting the determination data; a process for calculating movement progress data that indicates the progress of movement of the gaze point during the display period based on the determination data; a process for obtaining evaluation data for the subject based on the movement progress data; and a process for outputting the evaluation data.”).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the visual line in the evaluation, as taught by Shudo, in the Goto/Shudo combination, as doing so accounts for whether the user is looking at the correct location, which can allow for a more accurate analysis.
However, the Goto/Shudo combination does not teach wherein the target object and the non-target object are stereoscopic figures having stimulating elements that are more complex in colors, shapes, or patterns than a predetermined level, wherein the non-target object is a person image of a person, and the target object is the person image subjected to predetermined processing.
Gordon discloses an emotional and cognitive evaluation device. Specifically, Gordon teaches wherein the target object and the non-target object are stereoscopic figures having stimulating elements that are more complex in colors, shapes, or patterns than a predetermined level, wherein the non-target object is a person image of a person, and the target object is the person image subjected to predetermined processing ([0096]: “The hardware display surface may render or cause the display of one or more images for generating a view or a stereoscopic image of one or more computer generated virtual objects. For illustrative purposes, an object can be an item, data, device, person, place, or any type of entity”). Goto, Shudo, and Gordon are analogous arts as they are all related to systems that evaluate a user’s cognitive function.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the stereoscopic figures of a person as the objects from Gordon into the Goto/Shudo combination as they are both known objects to use for cognitive assessment, therefore it would be a simple substitution.
The Goto/Shudo/Gordon combination teaches wherein the target object is the person image (Gordon, [0096]: “The hardware display surface may render or cause the display of one or more images for generating a view or a stereoscopic image of one or more computer generated virtual objects. For illustrative purposes, an object can be an item, data, device, person, place, or any type of entity”) subjected to predetermined processing (Goto, Figure 3A shows the target object in a rotated position, which teaches predetermined processing.).
Regarding claim 2, the Goto/Shudo/Gordon combination teaches the evaluation device according to claim 1, wherein when the input of the test subject received by the input unit is an input corresponding to selection of the target object or the non-target object, the input processing unit is configured to receive the input of the test subject as an object selection input, and when the input processing unit determines that the target object is selected, the evaluation unit is configured to determine, as the response time, a time from when the display control unit displays the test screen to when the input processing unit receives the object selection input or a time from when the input processing unit receives the object selection input to when the input processing unit receives the object selection input (Goto, [0077]: “The operation unit 190 is used for inputting operation data by the subject”; [0096]: “when the task control unit 162 presents a task based on task data presenting the above-mentioned just-fit task, it: (a1) starts measurement from the timing when the task starts accepting answers; (a2) each time the subject selects one of the task figures as an answer, it refers to the answer data and judges whether it is correct or not; (a3) if the answer is incorrect, it continues to perform the task (accepts the selection of the next task figure); if the answer is correct, it specifies the time up to that time as the answer time and ends the task; and (a4) it repeats (a1) to (a3) until all tasks are completed.”), and determine an index of the ability to identify the object on the basis of the determined response time to evaluate the test subject's ability to identify the object (Goto, [0098]: “The score calculation unit 163 calculates a score for each provided task based on at least one of the detected answer elements and the answer time.”).
Regarding claim 3, the Goto/Shudo/Gordon combination teaches the evaluation device according to claim 1, wherein the display control unit is configured to display, on the display device, the test screen including the target object (Goto, [0087]-[0088]: “a plurality of figures (hereinafter referred to as "problem figures") are) 30, one specified figure (hereinafter referred to as the "specified figure").) 31 (hereinafter referred to as the "just fit task").) is generated. In this case, the task control unit 162 generates task data and answer data for the Just Fit task by: (A1) randomly setting multiple shapes as task shapes 30; (A2) setting one shape from the multiple set task shapes 30 as the specific shape 31; (A3) placing the specific shape 31 at a specific position in the task display area 33 and display area 34; (A4) randomly placing the other task shapes 30, including their orientation, in the specified display area 34; and repeating the processes of (A1) to (A4) for each task.”; Fig. 3A), and the evaluation unit is configured to evaluate the test subject’s ability to identify the object by determining the response time in each case where the input processing unit determines that the target object is selected (Goto, [0077]: “The operation unit 190 is used for inputting operation data by the subject”; [0096]: “when the task control unit 162 presents a task based on task data presenting the above-mentioned just-fit task, it: (a1) starts measurement from the timing when the task starts accepting answers; (a2) each time the subject selects one of the task figures as an answer, it refers to the answer data and judges whether it is correct or not; (a3) if the answer is incorrect, it continues to perform the task (accepts the selection of the next task figure); if the answer is correct, it specifies the time up to that time as the answer time and ends the task; and (a4) it repeats (a1) to (a3) until all tasks are completed.”), and determining an index of the ability to identify the object on the basis of the determined response time (Goto, [0098]: “The score calculation unit 163 calculates a score for each provided task based on at least one of the detected answer elements and the answer time.”).
Regarding claim 4, the Goto/Shudo/Gordon combination teaches the evaluation device according to claim 1.
However, the Goto/Shudo/Gordon combination does not teach wherein the display control unit is configured to display the test screen including the non-target object that has a predetermined similarity to the target object in figure feature, color, or pattern target and is different from the target object in the figure feature, the color, or the pattern target.
Goto teaches wherein the display control unit is configured to display the test screen including the non-target object that has a predetermined similarity to the target object in figure feature, color, or pattern target and is different from the target object in the figure feature, the color, or the pattern target (Goto, Fig. 3A shows using similar but different shapes, such as all the objects being puzzle pieces, yet differently shaped puzzle pieces.).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include using similar but different objects from Goto in the Goto/Shudo/Gordon combination, as it allows the target and non-target objects to be similar, which can increase the difficulty of the task, yet different enough that the target object can be distinguished from the non-target objects, which can lead to an accurate and comprehensive cognitive assessment of the user.
Regarding claim 5, the Goto/Shudo/Gordon combination teaches the evaluation device according to claim 1.
However, the Goto/Shudo/Gordon combination does not teach wherein the display control unit is configured to display, on the display device, the test screen including the target object and the non-target object, rotated at a random angle around a predetermined position on each of the target object and the non-target object.
Goto teaches wherein the display control unit is configured to display, on the display device, the test screen including the target object and the non-target object, rotated at a random angle around a predetermined position on each of the target object and the non-target object (Goto, Fig. 3A).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the rotation from Goto into the Goto/Shudo/Gordon combination as it allows the objects to be rotated to make the identification more complex, which can test a higher level of cognitive function of the individual and allow for a more comprehensive analysis to be performed.
Regarding claim 7, the Goto/Shudo/Gordon combination teaches the evaluation device according to claim 1, wherein the display control unit is configured to sequentially display different test screens on the display device while satisfying a predetermined condition, and the evaluation unit is configured to evaluate the test subject's ability to identify the object by determining the response time on each of the test screens (Goto, [0096]: “when the task control unit 162 presents a task based on task data presenting the above-mentioned just-fit task, it: (a1) starts measurement from the timing when the task starts accepting answers; (a2) each time the subject selects one of the task figures as an answer, it refers to the answer data and judges whether it is correct or not; (a3) if the answer is incorrect, it continues to perform the task (accepts the selection of the next task figure); if the answer is correct, it specifies the time up to that time as the answer time and ends the task; and (a4) it repeats (a1) to (a3) until all tasks are completed.”), and determining an index of the ability to identify the object on the basis of the determined response time (Goto, [0098]: “The score calculation unit 163 calculates a score for each provided task based on at least one of the detected answer elements and the answer time.”).
Regarding claim 8, the Goto/Shudo/Gordon combination teaches the evaluation device according to claim 1, wherein the input processing unit is configured to determine whether or not the target object or the non-target object is selected on the basis of the input of the test subject received by the input unit (Goto, [0096]: “when the task control unit 162 presents a task based on task data presenting the above-mentioned just-fit task, it: (a1) starts measurement from the timing when the task starts accepting answers; (a2) each time the subject selects one of the task figures as an answer, it refers to the answer data and judges whether it is correct or not; (a3) if the answer is incorrect, it continues to perform the task (accepts the selection of the next task figure)), and the evaluation unit is configured to output information indicative of not evaluating the ability to identify the object or not being able to evaluate the ability to identify the object in a case where a ratio between a first number of times the input processing unit determines that the target object is selected and a second number of times the input processing unit determines that the non-target object is selected falls within a predetermined range (Goto, [0100]: “for the just-fit task, the score calculation unit 163 calculates the score based on the percentage of correct answers until each task is answered correctly and the time required for each answer. In other words, in this case, the score calculation unit 163 adds the maximum score for each task if the task is answered without making a single mistake and in zero seconds of answer time, and adds less points for each mistake or each time the answer time becomes longer, and calculates the total of all tasks as the score for that task.”).
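The scoring described in Goto's quoted paragraph [0100] — maximum marks for an immediate, error-free answer, with points deducted per mistake and per unit of answer time — can be sketched as follows. The penalty weights are invented for illustration; the quoted passage does not specify Goto's actual weighting.

```python
def task_score(mistakes, answer_time_sec, max_score=100.0,
               mistake_penalty=10.0, time_penalty_per_sec=2.0):
    # Maximum score for a task answered with no mistakes in zero seconds
    # of answer time; fewer points for each mistake and for longer answers.
    score = (max_score
             - mistake_penalty * mistakes
             - time_penalty_per_sec * answer_time_sec)
    return max(score, 0.0)

def total_score(results):
    # Total over all tasks, per the quoted passage; each result is a
    # (mistakes, answer_time_sec) pair for one task.
    return sum(task_score(m, t) for m, t in results)
```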
Regarding claim 9, the Goto/Shudo/Gordon combination teaches the evaluation device according to claim 1, wherein the input processing unit determine that the target object is not selected in a case where the detected visual line is not present in a predetermined region including the object in the test screen displayed by the display control unit (Shudo, [0040]: “The memory unit 222 also stores an evaluation program that causes a computer to execute the following processes: a process for displaying an image; a process for detecting the position of the gaze point of a subject observing the display screen; a process for performing display operations including a first display operation for displaying a specific object on the display screen; and a second display operation for displaying the specific object and multiple comparison objects different from the specific object on the display screen after the first display operation has been performed; a process for setting a specific area corresponding to the specific object and a comparison area corresponding to each comparison object on the display screen; a process for determining, based on the position data of the gaze point, whether or not the gaze point is present in the specific area and the comparison area during the display period in which the second display operation is performed, and outputting the determination data; a process for calculating movement progress data that indicates the progress of movement of the gaze point during the display period based on the determination data; a process for obtaining evaluation data for the subject based on the movement progress data; and a process for outputting the evaluation data.”).
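Under the examiner's interpretation of claim 9, the determination reduces to a point-in-region test on the detected gaze point. A minimal sketch, assuming a rectangular region in screen coordinates (neither the claim nor the references specifies the region's shape, and the names here are hypothetical):

```python
def gaze_in_region(gaze_point, region):
    # region = (left, top, right, bottom) in screen coordinates (assumed).
    x, y = gaze_point
    left, top, right, bottom = region
    return left <= x <= right and top <= y <= bottom

def target_selected(selection_on_target, gaze_point, object_region):
    # Claim 9 as interpreted: the target object is determined not to be
    # selected when the detected visual line is outside the predetermined
    # region that includes the object.
    return selection_on_target and gaze_in_region(gaze_point, object_region)
```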
Regarding claim 10, the Goto/Shudo/Gordon combination teaches the evaluation device according to claim 1, wherein the evaluation unit is configured to evaluate the test subject’s ability to identify the object on a basis of a ratio between a first number of times the input processing unit determines that the target object is selected and a second number of times the input processing unit determines that the non-target object is selected, and the response time required for the test subject to input when the input processing unit determines that the target object is selected (Goto, [0100]: “for the just-fit task, the score calculation unit 163 calculates the score based on the percentage of correct answers until each task is answered correctly and the time required for each answer. In other words, in this case, the score calculation unit 163 adds the maximum score for each task if the task is answered without making a single mistake and in zero seconds of answer time, and adds less points for each mistake or each time the answer time becomes longer, and calculates the total of all tasks as the score for that task.”).
Regarding independent claim 11, Goto teaches a method for evaluating a test subject's ability to identify an object (Abstract: “The cognitive function measurement device 10 provides a plurality of tasks to a subject based on the acquired task data, detects the answer elements and answer time for each provided task, calculates a score for each provided task based on at least one of the detected answer elements and answer time”), the method performed by an evaluation device comprising a processor and a storage device storing a program for performing the method ([0060]: “when a predetermined operation is performed using a specific storage medium stored in the recording/playback unit 150, for example, various data are expanded and stored in each data and database in the data storage unit 130”; [0048]: “a data processing unit 160 that performs various types of data processing”), the method comprising:
displaying, on a display device, a test screen including a target object that the test subject should select and a non-target object that the test subject should not select ([0076]: “The display control unit 180 executes various processes, such as displaying a task or an image related to the measurement results of cognitive function, to generate an image to be displayed on the display unit 170 .”; [0087]-[0088]: “a plurality of figures (hereinafter referred to as "problem figures") are) 30, one specified figure (hereinafter referred to as the "specified figure").) 31 (hereinafter referred to as the "just fit task").) is generated. In this case, the task control unit 162 generates task data and answer data for the Just Fit task by: (A1) randomly setting multiple shapes as task shapes 30; (A2) setting one shape from the multiple set task shapes 30 as the specific shape 31; (A3) placing the specific shape 31 at a specific position in the task display area 33 and display area 34; (A4) randomly placing the other task shapes 30, including their orientation, in the specified display area 34; and repeating the processes of (A1) to (A4) for each task.”; Fig. 3A).
However, Goto does not teach detecting a visual line of the test subject based on a corneal reflection and a pupil position of the test subject using an imaging device.
Shudo discloses an evaluation device and method. Specifically, Shudo teaches detecting a visual line of the test subject based on a corneal reflection and a pupil position of the test subject using an imaging device ([0013]: “FIG. 1 is a perspective view that schematically illustrates an example of an gaze tracking device 100”; [0020]: “The stereo camera device 102 can detect the position of the pupil 112 and the position of the corneal reflection image 113 based on the brightness of the captured image”; [0034]: “The gaze point detection unit 214 detects position data of the subject's gaze point based on the image data of the eyeball 111 acquired by the image data acquisition unit 206. In this embodiment, the position data of the gaze point refers to the position data of the intersection between the subject's line of sight vector defined in a three-dimensional global coordinate system and the display screen 101S of the display device 101. The gaze point detection unit 214 detects the gaze vectors of the left and right eyeballs 111 of the subject based on the position data of the pupil center and the position data of the corneal curvature center acquired from the image data of the eyeballs 111 . After the line-of-sight vector is detected, the gaze point detection unit 214 detects position data of the gaze point indicating the intersection of the line-of-sight vector and the display screen 101S”). Goto and Shudo are analogous arts as they are both related to systems that evaluate a user’s cognitive function.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the visual line detection from Shudo in the device from Goto, as it allows the device to determine where the user is looking, ensuring measurements are taken only when the user is looking at the correct location, which yields more accurate measurements.
The Goto/Shudo combination teaches determining whether or not the target object is selected based on the input of the test subject; and evaluating the test subject's ability to identify the object on a basis of a response time required for the test subject to input when the target object is determined to have been selected (Goto, [0096]: “when the task control unit 162 presents a task based on task data presenting the above-mentioned just-fit task, it: (a1) starts measurement from the timing when the task starts accepting answers; (a2) each time the subject selects one of the task figures as an answer, it refers to the answer data and judges whether it is correct or not; (a3) if the answer is incorrect, it continues to perform the task (accepts the selection of the next task figure); if the answer is correct, it specifies the time up to that time as the answer time and ends the task; and (a4) it repeats (a1) to (a3) until all tasks are completed.”).
However, the Goto/Shudo combination does not teach including the visual line in the evaluation.
Shudo teaches including the visual line in the evaluation ([0040]: “The memory unit 222 also stores an evaluation program that causes a computer to execute the following processes: a process for displaying an image; a process for detecting the position of the gaze point of a subject observing the display screen; a process for performing display operations including a first display operation for displaying a specific object on the display screen; and a second display operation for displaying the specific object and multiple comparison objects different from the specific object on the display screen after the first display operation has been performed; a process for setting a specific area corresponding to the specific object and a comparison area corresponding to each comparison object on the display screen; a process for determining, based on the position data of the gaze point, whether or not the gaze point is present in the specific area and the comparison area during the display period in which the second display operation is performed, and outputting the determination data; a process for calculating movement progress data that indicates the progress of movement of the gaze point during the display period based on the determination data; a process for obtaining evaluation data for the subject based on the movement progress data; and a process for outputting the evaluation data.”).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the visual line in the evaluation, as taught by Shudo, in the Goto/Shudo combination, as it allows the combination to account for whether the user is looking at the correct location, which allows for a more accurate analysis.
However, the Goto/Shudo combination does not teach wherein the target object and the non-target object are stereoscopic figures having stimulating elements that are more complex in colors, shapes, or patterns than a predetermined level, wherein the non-target object is a person image of a person, and the target object is the person image subjected to predetermined processing.
Gordon discloses an emotional and cognitive evaluation device. Specifically, Gordon teaches wherein the target object and the non-target object are stereoscopic figures having stimulating elements that are more complex in colors, shapes, or patterns than a predetermined level, wherein the non-target object is a person image of a person, and the target object is the person image subjected to predetermined processing ([0096]: “The hardware display surface may render or cause the display of one or more images for generating a view or a stereoscopic image of one or more computer generated virtual objects. For illustrative purposes, an object can be an item, data, device, person, place, or any type of entity”). Goto, Shudo, and Gordon are analogous arts as they are all related to systems that evaluate a user’s cognitive function.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the stereoscopic figures of a person as the objects, as taught by Gordon, in the Goto/Shudo combination, as both are known objects for use in cognitive assessment; therefore, the modification would amount to a simple substitution of one known element for another.
The Goto/Shudo/Gordon combination teaches wherein the target object is the person image (Gordon, [0096]: “The hardware display surface may render or cause the display of one or more images for generating a view or a stereoscopic image of one or more computer generated virtual objects. For illustrative purposes, an object can be an item, data, device, person, place, or any type of entity”) subjected to predetermined processing (Goto, Figure 3A shows the target object in a rotated position, which teaches predetermined processing.).
Regarding claim 12, the Goto/Shudo/Gordon combination teaches a non-transitory computer readable medium storing a program, wherein executing of the program causes a computer to execute the method according to claim 11 (Goto, [0060]: “when a predetermined operation is performed using a specific storage medium stored in the recording/playback unit 150, for example, various data are expanded and stored in each data and database in the data storage unit 130”).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over the Goto/Shudo/Gordon combination as applied to claim 1 above, and further in view of Kobayashi (JP 2017205191). Citations to JP 2017205191 will refer to the English Machine Translation that accompanies this Office Action.
Regarding claim 6, the Goto/Shudo/Gordon combination teaches the evaluation device according to claim 1, wherein the display control unit is configured to display, on the display device, the test screen including the target object ([0077]: “The operation unit 190 is used for inputting operation data by the subject”; [0096]: “when the task control unit 162 presents a task based on task data presenting the above-mentioned just-fit task, it: (a1) starts measurement from the timing when the task starts accepting answers; (a2) each time the subject selects one of the task figures as an answer, it refers to the answer data and judges whether it is correct or not; (a3) if the answer is incorrect, it continues to perform the task (accepts the selection of the next task figure); if the answer is correct, it specifies the time up to that time as the answer time and ends the task; and (a4) it repeats (a1) to (a3) until all tasks are completed.”), and the target object is a user interface (UI) object selectable by the test subject ([0024]: “The present invention is also configured such that the task providing means provides a task selected based on the selection instruction of the subject”).
However, the Goto/Shudo/Gordon combination does not teach the UI object including a message for the test subject.
Kobayashi teaches a device for measuring identification functions of a subject. Specifically, Kobayashi teaches the UI object including a message for the test subject ([0060]: “At this time, the determination of whether the image that is the same as or similar to the target image from the subject image is correct or incorrect is deemed "incorrect," and a message is displayed”). Goto and Kobayashi are analogous arts as they are both related to systems that measure a user’s identification functions.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to include the message from Kobayashi in the device of the Goto/Shudo/Gordon combination, as it allows the device to provide a message to the subject, keeping the subject informed of the processes that are occurring.
Response to Arguments
All of applicant’s arguments regarding the rejections and objections previously set forth have been fully considered and are persuasive unless directly addressed subsequently.
Applicant amended the claims to overcome the claim objections and the 112(b) rejections; however, the new amendments have introduced new claim objections and 112 rejections.
Applicant’s arguments with respect to the 102 and 103 rejections of claims 1-12 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIN K MCCORMACK whose telephone number is (703)756-1886. The examiner can normally be reached Mon-Fri 7:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Sims, can be reached at 571-272-7540. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/E.K.M./Examiner, Art Unit 3791
/MATTHEW KREMER/Primary Examiner, Art Unit 3791