DETAILED ACTION
Applicant's arguments, filed 01/22/2026, have been fully considered. The following rejections and/or objections are either reiterated or newly applied. They constitute the complete set presently being applied to the instant application.
Applicant has amended the claims, filed 01/27/2023, and therefore rejections newly made in the instant Office action have been necessitated by amendment.
Claims 1, 4-9, 11, 13, and 16-20 are currently under examination.
All references to Applicant’s specification are made using the paragraph numbers assigned in the US publication of Applicant’s application US 2024/0252083 A1.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
A tracking unit of claim 1
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
A tracking unit of claim 1 is being interpreted as an optical tracking unit comprising a camera, as described by the specification in paragraphs 0012-0013, 0043, and 0055.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
It is noted that a display unit and a processing unit of claim 1 are not interpreted under 35 U.S.C. 112(f) because these claim limitations recite sufficient structure, materials, or acts to entirely perform the recited functions.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 4-9, 11, 13, and 16-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites “a processing unit configured to evaluate at least one fixation time or dwell time in the recorded gaze data by comparing the at least one fixation time or dwell time to a reference value” but it is unclear what the output of this evaluation is and how it relates to the claimed device. In particular, the evaluation is described as a comparison of data to a reference value but it is unclear how the output of the comparison (the recorded values being less than, equal to, or greater than the reference value) is utilized by the device or how it relates to the recited purpose of the device of “testing visual processing capabilities”. The following limitations describe the nature of the data but do not convey the purpose of the evaluation. For the purposes of this examination, the limitation will be interpreted as any use for the recited evaluation.
Claim 1 recites “a device for testing visual processing capabilities of a subject” but it is unclear how the claimed device results in “testing visual processing capabilities of a subject” as none of the claimed limitations appear to output a visual processing capability test result. In particular, it is unclear how the output of the evaluation relates to visual processing capabilities. For the purposes of this examination, the device will be interpreted as an autism assessment device. This rejection is further applied to the similar recitations of claim 13.
Claim 1 recites “wherein the at least one fixation time or dwell time is associated with the subject viewing the area in the visual stimulus picture located in the left hemifield of the subject’s visual field,” but it is unclear if this limitation is meant to convey that all of the fixation or dwell time parameters are associated with the left hemifield of the subject’s vision (i.e., only measurements taken in the left hemifield are considered) or that at least some of the fixation or dwell time parameters are associated with the left hemifield of the subject’s vision (i.e., both right and left hemifield measurements are considered, but at least some of the considered measurements must be from the left hemifield). For the purposes of this examination, the limitation is interpreted as at least some emphasis or consideration being placed on measurements from the left hemifield. This rejection is further applied to the similar recitations of claim 13.
Claims 4-9 and 11 are rejected by virtue of their dependence on claim 1.
Claims 16-20 are rejected by virtue of their dependence on claim 13.
Claim 18 recites “predicting an autism status of the subject” but it is unclear how this prediction relates to the claimed method. It is unclear what measurements or evaluations are used as inputs to the claimed prediction. For the purposes of this examination, the limitation will be interpreted as any autism status prediction.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 11, 18, and 20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 11 recites “wherein the processing unit is configured to perform a machine learning algorithm to predict the autism status of the subject based on the evaluation of the at least one fixation time or dwell time in the recorded gaze data”, but the specification does not fully support any and all possible machine learning algorithms being used to predict the autism status of the subject. In particular, the particular structure and/or training method of the machine learning model does not appear to be disclosed. Paragraph 0042 states that a variety of machine learning models may be utilized to predict autism status; however, the particular structure, training method, or steps carried out by the machine learning models to arrive at the claimed result are not seemingly disclosed. Paragraph 005 recites that the dataset was split into training and validation sets but does not describe the particular training method utilized for the model or the model structure. Paragraphs 0058-0061 and Table 1 describe the results of the analysis of the control and autism groups and recite that the model has a given accuracy, but the process that the model performs to convert the input features (the evaluation result) into the output (the prediction of autism status) is not seemingly disclosed. This rejection is further applied to claim 20.
Claim 18 recites “predicting the autism status of the subject” but the specification does not fully support any and all possible methods of predicting the autism status of the subject. In particular, the specification appears to indicate that the autism prediction is carried out by comparing a particular subset of the gaze time parameters to standard values (Table 1 and Paragraphs 0058-0061). The claimed scope of any method of predicting autism is not fully supported by the specification as the specification does not provide a particular method or algorithm for performing the claimed function at the recited scope of the claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4-9, 11, 13, and 16-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1, 4-9, 11, 13, and 16-20 are directed to a method of processing eye tracking signals using a computational algorithm, which is an abstract idea. Claims 1, 4-9, 11, 13, and 16-20 do not include additional elements that integrate the exception into a practical application or that are sufficient to amount to significantly more than the judicial exception for the reasons provided below which are in line with the 2014 Interim Guidance on Patent Subject Matter Eligibility (Federal Register, Vol. 79, No. 241, p 74618, December 16, 2014), the July 2015 Update on Subject Matter Eligibility (Federal Register, Vol. 80, No. 146, p. 45429, July 30, 2015), the May 2016 Subject Matter Eligibility Update (Federal Register, Vol. 81, No. 88, p. 27381, May 6, 2016), and the 2019 Revised Patent Subject Matter Eligibility Guidance (Federal Register, Vol. 84, No. 4, page 50, January 7, 2019).
The analysis of claim 1 is as follows:
Step 1: Claim 1 is drawn to a machine.
Step 2A – Prong One: Claim 1 recites an abstract idea. In particular, claim 1 recites the following limitations:
[A1] evaluate the at least one fixation time or dwell time in the recorded gaze data by comparing the at least one fixation time or dwell time to a reference value
This element [A1] of claim 1 is drawn to an abstract idea since it involves a mental process that can be practically performed in the human mind including observation, evaluation, judgment, and opinion and using pen and paper.
Step 2A – Prong Two: Claim 1 recites the following limitations that are beyond the judicial exception:
[A2] a display unit configured to display a visual stimulus picture to the subject
[B2] a tracking unit configured to perform eye tracking to record gaze data of the subject during display of the visual stimulus picture, the gaze data including at least one fixation time or dwell time associated with the gaze of the subject to an area in the visual stimulus picture
[C2] a processing unit
These elements [A2]-[C2] of claim 1 do not integrate the exception into a practical application of the exception. In particular, the elements [A2]-[B2] are merely adding insignificant extra-solution activity to the judicial exception, i.e., mere data gathering at a higher level of generality - see MPEP 2106.04(d) and MPEP 2106.05(g). Furthermore, the element [C2] is merely an instruction to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.04(d) and MPEP 2106.05(f).
Step 2B: Claim 1 does not recite additional elements that amount to significantly more than the judicial exception itself. In particular, the recitations “display a visual stimulus picture to the subject” and “a tracking unit configured to perform eye tracking to record gaze data of the subject during display of the visual stimulus picture, the gaze data including at least one fixation time or dwell time associated with the gaze of the subject to an area in the visual stimulus picture” are merely insignificant extra-solution activity appended to the judicial exception, e.g., mere data gathering in conjunction with the abstract idea using conventional, routine, and well-known elements, or simply displaying the results of the algorithm using conventional, routine, and well-known elements. In particular, the use of a generic display to present a visual stimulus is mere extra-solution activity that is insufficient to integrate the abstract idea into a practical application. Furthermore, the data-gathering element is nothing more than an eye tracking unit for tracking a user’s gaze. Such eye trackers are conventional as evidenced by:
U.S. Patent No. US 12393268 B2 (Cockram) discloses that it is common to use standard cameras to perform eye tracking (paragraph 0036 of Cockram);
U.S. Patent Application Publication No. US 2016/0227113 A1 (Horesh) discloses that eye tracking systems are well-known (paragraph 0001 of Horesh);
U.S. Patent Application Publication No. US 2016/0291690 A1 (Thorn) discloses that gaze tracking systems are conventional (paragraph 0004 of Thorn); and
U.S. Patent Application Publication No. US 2009/0295682 A1 (Qvarfordt) discloses that eye tracking using a camera is conventional (paragraph 0034 of Qvarfordt).
Further, the elements [A2] and [C2] do not qualify as significantly more because these limitations simply append well-understood, routine, and conventional activities previously known in the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known in the industry (see Electric Power Group, 830 F.3d 1350 (Fed. Cir. 2016); Alice Corp. v. CLS Bank Int’l, 110 USPQ2d 1976 (2014)) and/or a claim to an abstract idea requiring no more than being stored on a computer-readable medium, which is a well-understood, routine, and conventional activity previously known in the industry (see Electric Power Group, 830 F.3d 1350 (Fed. Cir. 2016); Alice Corp. v. CLS Bank Int’l, 110 USPQ2d 1976 (2014); SAP Am. v. InvestPic, 890 F.3d 1016 (Fed. Cir. 2018)).
Additionally, the limitations of “wherein the at least one fixation time or dwell time indicates an autism status of the subject, and wherein the at least one fixation time or dwell time is associated with the subject viewing the area in the visual stimulus picture located in the left hemifield of the subject’s visual field” merely describe the nature of the eye tracking data.
In view of the above, the additional elements individually do not integrate the exception into a practical application and do not amount to significantly more than the above-judicial exception (the abstract idea). Looking at the limitations as an ordered combination (that is, as a whole) adds nothing that is not already present when looking at the elements taking individually. There is no indication that the combination of elements improves the functioning of a computer, for example, or improves any other technology. There is no indication that the combination of elements permits automation of specific tasks that previously could not be automated. There is no indication that the combination of elements includes a particular solution to a computer-based problem or a particular way to achieve a desired computer-based outcome. Rather, the collective functions of the claimed invention merely provide conventional computer implementation, i.e., the computer is simply a tool to perform the process.
Claims 4-9 and 11 depend from claim 1, and recite the same abstract idea as claim 1. Furthermore, these claims only contain recitations that further limit the abstract idea (that is, the claims only recite limitations that further limit the algorithm), with the following exceptions:
Claims 4-9: are directed towards limiting the particular data collected by the eye tracker and/or the particular manner or type of visual stimulus being displayed.
Each of these claim limitations does not integrate the exception into a practical application. In particular, the elements of claims 4-9 are merely adding insignificant extra-solution activity to the judicial exception, i.e., mere data gathering at a higher level of generality - see MPEP 2106.04(d) and MPEP 2106.05(g). None of claims 4-9 recite limitations that amount to more than mere data gathering or extra-solution activity, as they merely describe the data being collected.
Additionally, the recitation of a machine learning algorithm in claim 11 is nothing more than the computer implementation/automation of an abstract mental process of screening a patient, which is what a physician typically does with a patient in a diagnostic setting.
In view of the above, the additional elements individually do not integrate the exception into a practical application and do not amount to significantly more than the above-judicial exception (the abstract idea). Looking at the limitations of each claim as an ordered combination in conjunction with the claims from which they depend (that is, as a whole) adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer, for example, or improves any other technology. There is no indication that the combination of elements permits automation of specific tasks that previously could not be automated. There is no indication that the combination of elements includes a particular solution to a computer-based problem or a particular way to achieve a desired computer-based outcome. Rather, the collective functions of the claimed invention merely provide conventional computer implementation, i.e., the computer is simply a tool to perform the process.
Claims 13 and 16-20 recite only limitations already addressed in the above analysis of claims 1, 4-9, and 11, and are thus rejected on the same basis as claims 1, 4-9, and 11.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4-9, 13, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. US 2017/0188930 A1 (hereinafter “Lahvis”) in view of Dundas, “A Lack of Left Visual Field Bias when Individuals with Autism Process Faces,” published 10/11/2011 by the Journal of Autism and Developmental Disorders, pages 1104-1111 (hereinafter “Dundas”).
Regarding claim 1, Lahvis discloses a device for testing visual processing capabilities of a subject (Abstract), comprising
a display unit configured to display a visual stimulus picture to the subject (Paragraph 0024: display system to display animations);
a tracking unit configured to perform eye tracking to record gaze data of the subject during display of the visual stimulus picture, the gaze data including at least one fixation time or dwell time associated with the gaze of the subject to an area in the visual stimulus picture (Paragraphs 0031, 0036, and 0045-0047: image capture devices to calculate the target of the subject’s gaze on the display and time calculations; the amount of time the subject’s gaze is within a region, or dwell, and the latency between the time the character looks towards the subject and the time that the subject looks at the object, or fixation); and
a processing unit configured to evaluate the at least one fixation time or dwell time in the recorded gaze data by comparing the at least one fixation time or dwell time to a reference value (Paragraph 0066: the subject’s responses are compared to threshold values from control subjects; Paragraphs 0045-0047: the subject responses include dwell and latency) wherein the at least one fixation time or dwell time indicates an autism status of the subject (Paragraph 0046: the amount of time a subject’s gaze location on the display is within a predetermined region may be used as a metric for autism assessment; Paragraph 0071-0073: evaluations performed with the data).
Lahvis further teaches that the displayed object and the gaze data associated thereto may be in the left hemifield of the subject’s vision (Figs. 2-5: the various objects on the left side of the screen), but fails to disclose any emphasis on or acknowledgment of left hemifield measurements, and thus is not considered to teach the limitation of “wherein the at least one fixation time or dwell time is associated with the subject viewing the area in the visual stimulus picture located in the left hemifield of the subject’s visual field” as interpreted in light of the 35 U.S.C. 112(b) rejections presented above.
Lahvis is thus considered to fail to disclose: the device wherein the at least one fixation time or dwell time is associated with the subject viewing the area in the visual stimulus picture located in the left hemifield of the subject’s visual field.
Dundas teaches that individuals with autism may not have a left visual field bias that is present in typically developing individuals, and that eye tracking technology may be used to detect the presence of the bias (Abstract). Thus, Dundas is reasonably pertinent to the problem at hand.
Dundas teaches that typically developing children have a left visual field bias and that children with autism do not present this same bias. Dundas teaches that determining whether the subject presents a left visual field bias may be a discriminating parameter for autism diagnosis (Results: paragraph 2; Discussion: paragraphs 1 and 6). Dundas teaches that the visual field bias analysis was performed on the display of faces (Data Reduction: paragraph 1; Fig. 2), which are displayed in front of the user (Procedure: paragraph 1), such that the left side of the face is in the left hemifield and the right side of the face is in the right hemifield. Dundas further contemplates that a visual field bias may be present for stimuli other than faces (Discussion: paragraph 7). Thus, Dundas teaches that left visual field bias is a discriminating parameter between typically developing patients and patients with autism.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to configure the invention of Lahvis to display centered faces and consider left visual field bias for facial processing and other stimuli, as taught by Dundas, because Dundas teaches that visual field bias may be a discriminating parameter between typical and autism patients, and thus considering visual field biases may improve autism detection accuracy.
Regarding claim 4, Lahvis in view of Dundas teaches the device of claim 1. Modified Lahvis further discloses the device wherein the visual stimulus picture includes an image of a face (Paragraph 0052: the face of the character; Fig. 1 reference 140: the displayed characters have faces).
Regarding claim 5, Lahvis in view of Dundas teaches the device of claim 4. Modified Lahvis further discloses the device wherein the at least one fixation time or dwell time are associated with the subject viewing an area in an eye region of the face depicted in the visual stimulus picture (Paragraphs 0046-0047: various time parameters can be extracted for the subject’s gaze in the eye region; Fig. 1 reference 152).
Regarding claim 6, Lahvis in view of Dundas teaches the device of claim 1. Modified Lahvis further discloses the device wherein the at least one fixation time or dwell time is associated with the subject viewing an area in a region of an image of one or more objects or events depicted in the visual stimulus picture (Paragraphs 0043-0047: the time parameters are associated with the subject fixating on various regions of interest).
Regarding claim 7, Lahvis in view of Dundas teaches the device of claim 6. Modified Lahvis further discloses the device wherein a face depicted in the visual stimulus picture has a gaze direction towards one of the one or more objects or events (Paragraph 0051: the attention of the character on the object; Fig. 2 references 210, 206, and 212).
Regarding claim 8, Lahvis in view of Dundas teaches the device of claim 7. Modified Lahvis further suggests the device, wherein the face depicted in the visual stimulus picture changes its gaze direction from one of the one or more objects or events to another.
This limitation is at least suggested by a combination of: paragraphs 0051-0059, which describe the sequences of Figs. 2-5 and where the character’s attention may be directed during these sequences, including at the object 206 in Fig. 2 and at another character 406 in Fig. 4; paragraphs 0060 and 0071, which describe how the subject views each of the depicted animations to produce the autism diagnosis and how each animation is directed towards a particular autism construct; and Figs. 2-5, which illustrate a displayed character whose face changes its gaze direction from one object or event to another depending on which construct is being tested. These paragraphs and figures in combination are considered sufficient to at least suggest the face of the visual stimulus changing its gaze direction from one object to another, since the characters of different constructs gaze at different objects or events and each of the constructs is displayed to the patient. Furthermore, paragraphs 0076-0079 teach that the animation can be adjusted based on subject parameters, and thus the particular number of objects being displayed, the number of objects that the character focuses on, and their particular order would appear to be subject to routine optimization and experimentation to optimize the test for the particular age, gender, sex, and/or IQ of the subject.
Regarding claim 13, Lahvis discloses a method for testing visual processing capabilities of a subject (Abstract), comprising
displaying a visual stimulus picture to the subject (Paragraph 0024: display system to display animations);
performing eye tracking to record gaze data of the subject during display of the visual stimulus picture, the gaze data including at least one fixation time or dwell time associated with the gaze of the subject to an area in the visual stimulus picture (Paragraphs 0031, 0036, and 0045: image capture devices to calculate the target of the subject’s gaze on the display and time calculations); and
evaluating the at least one fixation time or dwell time in the recorded gaze data by comparing the at least one fixation time or dwell time to a reference value (Paragraph 0066: the subject’s responses are compared to threshold values from control subjects; Paragraphs 0045-0047: the subject responses include dwell and latency) wherein the at least one fixation time or dwell time indicates an autism status of the subject (Paragraph 0046: the amount of time a subject’s gaze location on the display is within a predetermined region may be used as a metric for autism assessment; Paragraph 0071-0073: evaluations performed with the data).
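For illustration only, and not as part of the record, the threshold-comparison evaluation described in Lahvis (paragraph 0066) might be sketched as follows; the function name, threshold value, and direction of the comparison are hypothetical assumptions, not teachings of the reference:

```python
# Hypothetical sketch of the claimed evaluation step: compare a measured
# fixation/dwell time against a reference value derived from control subjects.
# All names and values are illustrative assumptions, not from the record.

def evaluate_dwell_time(dwell_time_s: float, reference_s: float) -> bool:
    """Return True if the dwell time falls below the control reference,
    which in this sketch is treated as the atypical result."""
    return dwell_time_s < reference_s

# Example: a 1.2 s dwell on a region of interest vs. a 2.0 s control reference
atypical = evaluate_dwell_time(1.2, reference_s=2.0)
```

The sketch merely illustrates why such a comparison is characterized below as one readily performed in the human mind.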
Lahvis further teaches that the displayed object and the gaze data associated thereto may be in the left hemifield of the subject’s vision (Figs. 2-5: the various objects on the left side of the screen), but fails to disclose any emphasis on or acknowledgment of left-hemifield measurements, and thus is not considered to teach the limitation of “wherein the at least one fixation time or dwell time is associated with the subject viewing the area in the visual stimulus picture located in the left hemifield of the subject’s visual field” as interpreted in light of the 35 U.S.C. 112(b) rejections presented above.
Lahvis is thus considered to fail to disclose the method wherein the at least one fixation time or dwell time is associated with the subject viewing the area in the visual stimulus picture located in the left hemifield of the subject’s visual field.
Dundas teaches that typically developing children have a left visual field bias and that children with autism do not present this same bias. Dundas teaches that determining whether the subject presents a left visual field bias may be a discriminating parameter for autism diagnosis (Results: paragraph 2; Discussion: paragraphs 1 and 6). Dundas teaches that the visual field bias was assessed on the display of faces (Data Reduction: paragraph 1; Fig. 2) which are displayed in front of the user (Procedure: paragraph 1), and thus the left side of the face is in the left hemifield and the right side of the face is in the right hemifield. Dundas further contemplates that a visual field bias may be present for stimuli other than faces (Discussion: paragraph 7). Thus, Dundas teaches that left visual field bias is a discriminating parameter between typically developing patients and patients with autism.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to configure the method of Lahvis to display centered faces and consider left visual field bias for facial processing and other stimuli as taught by Dundas, because Dundas teaches that visual field bias may be a discriminating parameter between typically developing patients and patients with autism, and thus considering visual field biases may improve autism detection accuracy.
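For illustration only, the left visual field bias measurement taught by Dundas might be sketched as the fraction of gaze samples landing left of a centered stimulus; the coordinate convention, sample format, and split point are hypothetical assumptions, not details from the reference:

```python
# Hypothetical sketch of a left visual field bias metric: the fraction of
# gaze samples falling in the left hemifield of a centered stimulus.
# Coordinates and the midline split are illustrative assumptions only.

def left_field_fraction(gaze_x: list, screen_center_x: float) -> float:
    """Fraction of horizontal gaze samples falling left of the screen center."""
    if not gaze_x:
        return 0.0
    left = sum(1 for x in gaze_x if x < screen_center_x)
    return left / len(gaze_x)

# Per Dundas, a typically developing child may show a left-field majority
# (fraction > 0.5), while a subject with autism may not show this bias.
bias = left_field_fraction([100, 120, 300, 110, 130], screen_center_x=200)
```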
Regarding claim 16, Lahvis in view of Dundas teaches the method of claim 13. Modified Lahvis further discloses the method wherein the at least one fixation time or dwell time is associated with the subject viewing an area located in a region of an image of one or more objects or events depicted in the visual stimulus picture, wherein a face depicted in the visual stimulus picture has a gaze direction towards one of the one or more objects or events (Paragraphs 0043-0047: the time parameters are associated with the subject fixating on various regions of interest; Paragraph 0051: the attention of the character on the object; Fig. 2 references 210, 206, and 212).
Regarding claims 9 and 17, Lahvis in view of Dundas teaches the device of claim 1 and the method of claim 13 respectively. Modified Lahvis further suggests the device and method wherein the display unit is configured to consecutively display a plurality of visual stimulus pictures to the subject, wherein each of the plurality of visual stimulus pictures is displayed to the subject for a predetermined duration.
This limitation is at least suggested by a combination of paragraph 0045, which describes that the animation is a consecutive display of different frames and that specific durations of the animation are set where specific objects are identified as salient; paragraph 0077, which describes that regions of interest may be designated for specific timeframes; and Figs. 2-5. This combination of recitations with the figures at least suggests that the specific durations during which an object is identified as salient correspond to specific durations of a particular stimulus within the animation, such as the duration of a character focusing on the object as depicted in Fig. 2.
Regarding claim 18, Lahvis in view of Dundas teaches the method of claim 13. Modified Lahvis further discloses the method further comprising predicting the autism status of the subject (Paragraphs 0041 and 0046-0047: the results may be processed to generate quantitative autism assessment metrics compared to predetermined thresholds; Paragraph 0067: an autism condition may be indicated; Paragraphs 0073-0074: classify subjects that share an ASD diagnosis).
Regarding claim 19, Lahvis in view of Dundas teaches the method of claim 18. Modified Lahvis further discloses the method wherein the autism status of the subject is predicted based on automatically comparing the evaluated at least one fixation time or dwell time in the recorded gaze data to standard values (Paragraphs 0041 and 0046-0047: the results may be processed to generate quantitative autism assessment metrics compared to predetermined thresholds; Paragraphs 0066-0067: an autism condition may be indicated by comparing subject responses to control responses; Paragraphs 0073-0074: classify subjects that share an ASD diagnosis; Figs. 7-8).
Claims 11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lahvis, US Patent Application Publication No. US 2017/0188930 A1, hereinafter Lahvis, in view of Dundas, “A Lack of Left Visual Field Bias when Individuals with Autism Process Faces,” published 10/11/2011 in the Journal of Autism and Developmental Disorders, pages 1104-1111, as applied to claims 1 and 18 above, and further in view of Yoo, US Patent Application Publication No. US 2022/0392080 A1, hereinafter Yoo.
Regarding claims 11 and 20, Lahvis in view of Dundas teaches the device of claim 1 and the method of claim 18, respectively. Modified Lahvis further discloses the device and method wherein the processing unit is configured to predict the autism status of the subject based on the evaluation of the at least one fixation time or dwell time in the recorded gaze data (Paragraphs 0041 and 0046-0047: the results may be processed to generate quantitative autism assessment metrics compared to predetermined thresholds; Paragraph 0067: an autism condition may be indicated; Paragraphs 0073-0074: classify subjects that share an ASD diagnosis). Lahvis further discloses the use of a machine learning algorithm to classify facial expressions (Paragraph 0037).
Lahvis fails to further disclose the prediction being performed by a machine learning algorithm.
Yoo teaches a method for supporting an attention test based on an attention map and an attention movement map. The method includes generating a score distribution for each segment area of frames satisfying preset conditions, among frames of video content (video) that is produced in advance so as to be suitable for the purpose of a test, generating an attention map corresponding to the frames based on the distribution of the gaze point of a subject, generating an attention movement map corresponding to the frames based on information about movement of the gaze point of the subject, and calculating the attention of the subject using the score distribution for each segment area, the attention map, and the attention movement map (Abstract). Thus, Yoo falls within the same field of endeavor as Applicant’s invention.
Yoo teaches that the attention map may be used for screening and diagnosis and that deep learning, classifiers, or other algorithms may be used to support the screening or diagnosis (Paragraphs 0103-0104).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to implement the machine learning algorithms for diagnosis as taught by Yoo into the device and method of modified Lahvis for performing the autism status prediction, because machine learning algorithms are well suited to identifying and classifying data based on patterns and may improve the classification accuracy of Lahvis when performing the ASD classification (Lahvis: paragraph 0073).
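For illustration only, the kind of learned classifier contemplated by Yoo (paragraphs 0103-0104) might be sketched as a minimal nearest-centroid classifier over gaze-derived features; the features, labels, and choice of algorithm are hypothetical assumptions, not teachings of either reference:

```python
# Hypothetical sketch: a nearest-centroid classifier over gaze features
# (e.g., normalized dwell times or a left-field fraction). Feature values
# and class labels below are illustrative assumptions only.

def train_centroids(samples, labels):
    """Compute a per-class mean feature vector (centroid)."""
    sums, counts = {}, {}
    for vec, lab in zip(samples, labels):
        acc = sums.setdefault(lab, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(centroids[lab], vec)))

# Two toy classes separated along a single gaze feature
cents = train_centroids([[0.8], [0.7], [0.3], [0.2]],
                        ["typical", "typical", "asd", "asd"])
label = classify(cents, [0.25])
```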
Response to Arguments
Applicant’s arguments and amendments with respect to the rejections previously presented under 35 USC 112 have been fully considered and overcome some but not all of the previously presented grounds of rejection. New grounds of rejection have been necessitated by Applicant’s amendments.
In particular, the output of the evaluation and how it relates to the device and method remain unclear. Additionally, Applicant has not described how the specification supports the full scope of claim 18 or how the specification describes the structure and/or operation of the machine learning algorithms.
Applicant’s arguments with respect to the rejections presented under 35 USC 102 of claims 1 and 13 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant’s arguments directed towards the rejections presented under 35 USC 101 have been fully considered but are not found to be persuasive.
Applicant argues that the amended claim is directed towards a specific, practical device and that the steps of performing eye tracking, displaying visual stimuli, recording dwell and fixation time, and evaluating the fixation and dwell time to determine autism status are not capable of being performed in the human mind and thus not an abstract idea.
This argument is not found to be persuasive because the display of visual stimuli and the collection of eye tracking data are addressed as extra-solution activity to the abstract idea (i.e., data gathering using routine and conventional sensors and equipment); the abstract idea is the evaluation of the gathered data, which is a simple threshold comparison readily performed in the human mind. Additionally, present claims 1 and 13 do not indicate that the evaluation determines an autism status, as described in the 35 U.S.C. 112(b) rejection above. Even if such a determination were made, it would be considered part of the abstract idea as a simple indication of autism presence when the measured value falls above or, alternatively, below the threshold value. The claims are considered to be directed towards the abstract idea of autism determination using data gathered from routine and conventional sensors.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW ERIC OGLES whose telephone number is (571)272-7313. The examiner can normally be reached M-F 8:00AM - 5:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Sims can be reached on Monday-Friday from 9:00AM – 4:00PM at (571) 272 – 7540. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW ERIC OGLES/Examiner, Art Unit 3791
/JASON M SIMS/Supervisory Patent Examiner, Art Unit 3791