DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s amendment was received on 11/3/25 and has been entered and made of record. Currently, claims 1-17 are pending, of which claims 11-17 are newly added.
Claim Rejections - 35 USC § 101
Applicant’s amendment to claims 1, 8, and 9 and the arguments presented in the remarks filed 11/3/25 overcome the rejection set forth in the previous Office Action; the rejection has therefore been withdrawn. The claims are deemed to integrate the abstract idea into a practical application and to describe an improvement in the technology and technical field.
Double Patenting
Applicant’s amendment to claims 1, 8, and 9 and the filing of Terminal Disclaimers overcome the rejection set forth in the previous Office Action; the rejection has therefore been withdrawn.
Response to Arguments
Applicant’s arguments, see pages 18-20 of the remarks, filed 11/3/25, with respect to the rejection(s) of claim(s) 1, 8, and 9 under 35 USC 102(a)(1) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of newly found prior art necessitated by the current amendment.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-12, 14, 15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Macknik et al. (US 2016/0106358) in view of Zidan et al. (US 2020/0397288).
Regarding claims 1, 8, and 9, Macknik discloses a non-transitory computer-readable recording medium storing a program, a method, and a recovery level estimation device comprising:
a memory storing instructions (see Fig. 4 and paras 30-31, memory 42); and
one or more processors (see Fig. 4 and paras 30-31, processor(s) 38) configured to execute the instructions to:
acquire images in which eyes of a patient are captured (see paras 17-18 and 30-31, eye tracking device 32 acquires images of patient’s eyes);
extract an eye movement feature which is a feature of an eye movement based on the images (see paras 16-20 and 23, eye movement features, such as microsaccades, are extracted); and
estimate a rehabilitation recovery level of the patient based on the eye movement feature by using a rehabilitation recovery level estimation model, wherein the rehabilitation recovery level estimation model has been learned by machine learning in advance (see paras 20 and 28, based on extracted eye movement features/dynamics, a trained algorithm can generate a report regarding the response to treatment or assessment of progression, both are considered recovery level estimations).
Macknik does not disclose expressly wherein the rehabilitation recovery level estimation model has been trained by pairs of eye movement features, as input data, and correct answer information for rehabilitation recovery levels as correct data, and the images are captured by a high-speed camera, during presentation of a moving light spot along predetermined positions on a two-dimensional grid, and the images include timestamps synchronized to grid positions of the two-dimensional grid.
Zidan discloses wherein the rehabilitation recovery level estimation model has been trained by pairs of eye movement features, as input data, and correct answer information for rehabilitation recovery levels as correct data (see paras 159, 185, 188, 223-228, 236, 238, 247-248, 250-251, and 265-266, artificial intelligence, such as neural networks, are trained with sensed parameters from ocular testing of patients; historical patient data can be stored and used for comparison to determine abnormality progression or, conversely, abnormality regression, i.e., rehabilitation recovery), and
the images are captured by a high-speed camera, during presentation of a moving light spot along predetermined positions on a two-dimensional grid, and the images include timestamps synchronized to grid positions of the two-dimensional grid (see Figs. 34-36 and paras 193, 196, 208, 258, and 261, a high-speed camera captures images of a patient’s eyes 161 and 163 during a test in which a patient is instructed to follow a red dot with their eyes, the images are timestamped).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the high-speed camera capturing ocular test data using moving light spots, as described by Zidan, with the system of Macknik.
The suggestion/motivation for doing so would have been to provide enhanced effectiveness, reliability, accuracy, and efficiency in detecting eye abnormalities (para 81 of Zidan).
Therefore, it would have been obvious to combine Zidan with Macknik to obtain the invention as specified in claims 1, 8, and 9.
Regarding claim 2, Macknik further discloses wherein the eye movement feature includes eye vibration information concerning vibrations of the eyes (see paras 15-16, 18, and 20, eye vibration, such as SWJ, can be detected).
Regarding claim 3, Macknik further discloses wherein the eye movement feature includes information concerning one or more of a bias of movement directions of the eyes and a misalignment of right and left movements (see paras 15, 17-18, and 20, saccades and microsaccades are detected).
Regarding claim 4, Macknik further discloses wherein the one or more processors are further configured to present a task concerning eye movements, acquire the images of the eyes of the patient to whom the task is presented (see paras 18 and 22, visual stimuli or a visual task is presented to the patient), and
extract the eye movement feature in the task based on the images (see paras 18 and 22-23, eye movement is detected during the visual task).
Regarding claim 5, Macknik further discloses wherein the eye movement feature includes visual field defect information concerning a visual field defect (see paras 16-20 and 28, a visual field defect, such as oculomotor disease, can be detected).
Regarding claim 6, Macknik further discloses wherein the one or more processors are further configured to store patient information concerning one or more of an attribute of the patient and previous recovery records of the patient, and estimate a recovery level of the patient based on the patient information and the eye movement feature (see paras 20-21 and 26-29, both normal and abnormal patient information is compared and used to determine progression and/or response to treatment).
Regarding claim 7, Macknik further discloses wherein the one or more processors are further configured to output an alert in response to the recovery level of the patient being worse than a threshold value (see paras 19-20 and 28, a report is generated regarding the response to treatment or assessment of progression, thresholds can be used to determine defects, such as SWJ).
Regarding claim 10, Macknik further discloses wherein the one or more processors are further configured to output the alert with respect to a medical professional in order for the medical professional to optimize a rehabilitation plan of the patient (see paras 19-20 and 28, a report is generated regarding the response to treatment or assessment of progression, thresholds can be used to determine defects, such as SWJ).
Regarding claim 11, Zidan further discloses wherein the one or more processors are further configured to use, as eye vibration information, information concerning a time-series change of xy coordinates of a center of a pupil for each of a right eye and a left eye of the eyes of the patient (see paras 185, 188, 190, 196, 208, 223-225, 227-228, and 261, saccades are detected during tests utilizing a high-speed camera that captures images of a patient’s eyes 161 and 163 in which a patient is instructed to follow a red dot with their eyes, pupil size is also detected).
Regarding claim 12, Zidan further discloses wherein the one or more processors are further configured to use, as eye vibration information, frequency information extracted by a Fast Fourier Transform (FFT) within any of a plurality of time segments (see para 261: the medical assembly 110 may sense the duction of the eyes for a designated eye test over a timeline having timestamps at increments of milliseconds; the test may begin at timestamp zero, and the duction of the subject 112 may continue for a duration of nine hundred milliseconds; at each millisecond mark or timestamp, the medical assembly 110 is operable to capture and store a photograph or image of each eye 161, 163 of the subject 112, resulting in eighteen hundred eye images, eighteen hundred corresponding image files 365, and eighteen hundred corresponding timestamp values; this is a description of extracting information using an FFT).
Regarding claim 14, Zidan further discloses wherein the one or more processors are further configured to acquire the information concerning the one or more of the bias of the movement directions of the eyes by: obtaining a totaled value by totaling, on a time axis, an inner product of angles formed by the movement directions of the eyes of the patient; and at least one of: determining, based on the totaled value, the misalignment to be greater as the totaled value is greater, and determining, based on the totaled value, the misalignment to be greater as the totaled value is smaller (see paras 125, 244, 264, and 268, threshold values are used to determine an identified abnormality).
Regarding claim 15, Zidan further discloses wherein the one or more processors are further configured to acquire the information concerning bias of movement directions of the eyes by at least one of: determining a size of an area where a tracking failure is determined to occur at or above a first predetermined frequency and as the patient attempts to track a light spot presented to the patient, and dividing a light spot display area into virtual squares and counting squares where the tracking failure is determined to occur at or above a second predetermined frequency (see paras 244, 264, and 268, threshold values are used to determine an identified abnormality).
Regarding claim 17, Zidan further discloses wherein the high-speed camera is configured to capture the images at least at 1,000 frames per second (see para 261, 1,800 images captured in 900 milliseconds corresponds to a capture rate of at least 1,000 frames per second).
Allowable Subject Matter
Claims 13 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK R MILIA whose telephone number is (571)272-7408. The examiner can normally be reached Monday-Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi Sarpong, can be reached at 571-270-3438. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARK R MILIA/Primary Examiner, Art Unit 2681