Prosecution Insights
Last updated: April 19, 2026
Application No. 18/009,798

SYSTEM AND METHOD FOR CHARACTERIZING DROOPY EYELID

Status: Non-Final Office Action (§101, §103, §112)
Filed: Dec 12, 2022
Examiner: TUCKER, WESLEY J
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Mor Research Applications Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 1m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 83% (596 granted / 715 resolved), +21.4% vs TC avg (above average)
Interview Lift: +6.1% (moderate, roughly +6% lift for resolved cases with interview)
Avg Prosecution: 3y 1m (typical timeline)
Total Applications: 734 across all art units; 19 currently pending
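The headline percentages above are plain ratios of the raw counts. As a quick sanity check, a short Python sketch (illustrative only; the helper name is hypothetical, not part of any analytics tool) reproduces them:

```python
def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# Raw career counts from the panel above.
career = allowance_rate(596, 715)   # about 83.4%, displayed as 83%
tc_average = career - 21.4          # implied Tech Center baseline, about 62%
with_interview = career + 6.1       # interview lift applied, about 89.5%, displayed as 90%
```

The displayed 83% and 90% are consistent with the underlying counts once rounded to whole percentages.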

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 35.7% (-4.3% vs TC avg)
§102: 39.4% (-0.6% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

(Chart legend: black line = Tech Center average estimate.) Based on career data from 715 resolved cases.
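One consistency check on the figures above: each statute's rate and its "vs TC avg" delta imply the same underlying Tech Center baseline, assuming the delta is simply the rate minus the Tech Center average. A minimal Python sketch makes that explicit:

```python
# Per-statute rate and reported delta (assumed to be rate minus TC average), in percent.
stats = {
    "101": (12.3, -27.7),
    "103": (35.7, -4.3),
    "102": (39.4, -0.6),
    "112": (8.3, -31.7),
}

# Recover the implied Tech Center average for each statute.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
```

All four statutes back out the same 40.0% baseline, which suggests the deltas were computed against a single Tech Center-wide estimate rather than per-statute cohorts.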

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 19, 2025 has been entered.

Response to Amendment

Applicant's amendment filed December 19, 2025 has been entered and made of record. Claims 1, 5, 16-18, 22, 24-25, 31 and 33 are amended. New claim 34 is added. Claims 4, 6-8, 10-14, 20-21, 26, 29-30 and 32 are cancelled. Claims 1-3, 5, 9, 15-19, 22-25, 27-28, 31 and 33-34 are pending.

Applicant's remarks in view of the newly presented amendments have been considered but are not found to be persuasive, for at least the following reasons.

101 Rejection – Abstract Idea

Applicant argues that the amendment reciting "processing the eye-related image data and the non-eye-related image data for determining whether the upper eyelid determined as droopy is due to patient malingering or not" "amounts to significantly more, and confers a meaningful limit extending beyond the practice of a merely alleged abstract idea." Examiner disagrees: there is no recitation of any specific processing or of any specialized processor or machine (indeed, claim 1 recites no processor of any kind). The independent claims are still recited so broadly as to read on the human process of looking at an image and making a mental decision. The §101 abstract idea judicial exception rejection is accordingly maintained.

112(b) Rejection – Indefinite

Applicant's remarks are not found to be persuasive. 
Applicant points to paragraphs [0023] and [0033]: "[0033] In some embodiments, the ROI may not only include the patient's eye or eyes, but also additional portions of the patient face such as the forehead, nose, cheek, etc., for example, to capture and analyze the patient's facial muscle movement."

The term "non-eye-related" does not appear in the specification. The question remains: which non-eye-related ROI is used to determine patient malingering? There is no recitation of which non-eye-related region is used, or of how it might be used, to determine patient malingering. Is it the nose, cheek and/or forehead? Which ROI is useful for determining patient malingering, and how?

The same discussion applies to "eye-related image data." Applicant refers to paragraph [0045]: "one or more geometric feature of the patient's eye(s). Such features include, inter alia, pupil diameter, pupil area, pupil curvature, center C of pupil 20 and/or a feature of eyelid 10 such as, for example, lower central edge of the upper eyelid 12, pupillary distance, and/or the like." Which example eye features are used in the determination of patient malingering, and how are the features used in that determination? There is no recitation in the specification of how eye-related image data and non-eye-related image data are processed to determine patient malingering. The 112(b) rejection is accordingly maintained.

103 Rejection in view of Utsugi and Incesu

Applicant argues that the combination of Utsugi and Incesu does not teach the independent claims as recited. Examiner disagrees. Utsugi teaches receiving image data of both eye data (paragraphs [0126]-[0128] and [0136]) and eyebrow data, i.e., non-eye-related facial image data (paragraph [0137]), for the purpose of determining a drooping eyelid and a judgment of elevated eyebrows. 
Incesu teaches methods for determining malingering in ophthalmology (see page 708, "Abstract") and teaches that, in order to detect voluntarily generated ptosis (droopy eyelid), ipsilateral eyebrow depression is observed; if ipsilateral eyebrow depression is present, it is a case of malingering (see page 715, column 1, 7th paragraph, "Ptosis"). Incesu thus teaches that, in order to detect malingering of a droopy eyelid, one should take into consideration the position of the eyebrow. The eyebrow is interpreted as a "non-eye related facial feature," as there is no recited definition in Applicant's specification of what is considered to be a "non-eye related facial feature." Therefore it would have been obvious to one of ordinary skill in the art before the time of filing to use the malingering determination taught by Incesu, based on eyebrow position, in combination with the eyebrow determination and droopy eyelid determination of Utsugi, in order to accurately predict the presence of malingering of the user's eyelid. The rejection in view of Utsugi and Incesu is accordingly maintained.

112(a) Enablement Rejection

A new 112(a) enablement rejection is also presented. The independent claims recite that eye-related and non-eye-related image data are processed to determine patient malingering or not. However, there is no disclosure in the specification of how eye-related and non-eye-related facial image data are processed to arrive at a determination of malingering. The specification describes the details of determining malingering at paragraph [0016]: "[0016] In some examples, the evaluation system may employ a rule-based engine and/or artificial intelligence functionalities which are based, for example, on a machine learning model. The machine learning model may be trained by a multiplicity of facial image data of patients. The rule-based engine and/or the machine learning model are configured to determine whether a patient is malingering a droopy eyelid or not. 
In some embodiments, the rule-based engine and/or the machine learning model are configured to distinguish, based in the patient's facial image data, between malingered or non-malingered vision-impairing droopy eyelids."

There is no description of how exactly eye-related and non-eye-related image data are processed to determine malingering; the specification only describes training a learning model on examples of images.

Additionally, the malingering determination is recited in paragraphs [0023]-[0024]: "[0023] In some embodiments, additional facial features may be captured by a camera and processed to determine, for example, whether the patient is trying to exaggerate droopy upper eyelid to malinger vision-impairing. Such facial features can pertain, for example, comparison to the patient's other eye, facial expressions and/or movements of the patient's mouth, eyebrows, cheekbones and/or forehead. [0024] For instance, patient malingering (or lack thereof) may for example be detected by an evaluation system based on artificial intelligence functionalities which are based on a machine learning model (e.g., an artificial neural network and/or other deep learning machine learning models; regression-based analysis; a decision tree; and/or the like), and/or by a rule-based engine. For example, the evaluation system may be configured to analyze image data descriptive of a patient's facial muscle features, muscle activation, facial expressions, and/or the like, and provide an output indicating whether the patient is malingering a vision-impairing droopy eyelid, or not. For example, a machine learning model may be trained with images of video sequences of facial expressions, labelled by an expert either as 'malingering' or 'not malingering'."

Again, the only way the determination of malingering is described is by applying a machine learning evaluation system. 
Claim 15 recites that a machine learning system performs the determination of patient malingering; therefore the scope of claim 1 (and the other independent claims) covers embodiments wherein a machine learning model is NOT present. There is no disclosure in the specification of an embodiment wherein the malingering determination is performed without a machine learning model; therefore claim 1, and the claims that do not contain the machine learning model, are not enabled by the specification.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 16 and 31 are rejected under 35 U.S.C. 101 because the claimed invention is directed to two steps of receiving image data descriptive of one or more eye-related features and one or more non-eye related facial features, without significantly more. The claims recite only three steps, as follows: receiving eye-related image data descriptive of one or more eye-related features of the subject; receiving non-eye-related image data descriptive of one or more non-eye related facial features of the subject; and processing the eye-related image data and the non-eye-related image data for determining whether a droopy upper eyelid is due to patient malingering or not.

This judicial exception is not integrated into a practical application because there is no recitation of a determination actually being made as to the eyelid being droopy, or as to the eyelid droopiness being due to patient malingering or not. The final step is processing the image data FOR determining malingering or not. There is no recitation of any specific image processing or of any specific image features for possible image-processing determinations. 
The claim therefore reads on collecting/receiving/gathering image data, because the language is so broad as to read on a person looking at image data or looking at an image. The claim language reads on the process of a person mentally/manually looking at an image of a face and "processing," or mentally evaluating, the image data with regard to patient malingering. The claims comprise two steps of receiving "image data descriptive of…"; however, there is no recitation of what the data is or of how the data is descriptive. Accordingly, the image data is interpreted as merely the image itself. There is no recitation of processing the received image data, and therefore no practical application is recited.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because claim 1 recites only a method, with no computer or processor required, and claims 16 and 31 recite only generic computing components, with no recitation of specialized computing components or of how the computing components are used to implement the steps.

Claim Rejections - 35 USC § 112(a) - Enablement

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 
112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-3, 5, 9, 15-19, 22-25, 27-28, 31 and 33-34 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claims contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.

Independent claims 1, 16 and 31 recite that eye-related and non-eye-related image data are processed to determine patient malingering or not. However, there is no disclosure in the specification of how eye-related and non-eye-related facial image data are processed to arrive at a determination of malingering. The dependent claims are not enabled based on their dependency on the independent claims.

The specification describes the details of determining malingering at paragraph [0016]: "[0016] In some examples, the evaluation system may employ a rule-based engine and/or artificial intelligence functionalities which are based, for example, on a machine learning model. The machine learning model may be trained by a multiplicity of facial image data of patients. The rule-based engine and/or the machine learning model are configured to determine whether a patient is malingering a droopy eyelid or not. 
In some embodiments, the rule-based engine and/or the machine learning model are configured to distinguish, based in the patient's facial image data, between malingered or non-malingered vision-impairing droopy eyelids."

There is no description of how exactly eye-related and non-eye-related image data are processed to determine malingering; the specification only describes training a learning model on examples of images.

Additionally, the malingering determination is recited in paragraphs [0023]-[0024]: "[0023] In some embodiments, additional facial features may be captured by a camera and processed to determine, for example, whether the patient is trying to exaggerate droopy upper eyelid to malinger vision-impairing. Such facial features can pertain, for example, comparison to the patient's other eye, facial expressions and/or movements of the patient's mouth, eyebrows, cheekbones and/or forehead. [0024] For instance, patient malingering (or lack thereof) may for example be detected by an evaluation system based on artificial intelligence functionalities which are based on a machine learning model (e.g., an artificial neural network and/or other deep learning machine learning models; regression-based analysis; a decision tree; and/or the like), and/or by a rule-based engine. For example, the evaluation system may be configured to analyze image data descriptive of a patient's facial muscle features, muscle activation, facial expressions, and/or the like, and provide an output indicating whether the patient is malingering a vision-impairing droopy eyelid, or not. For example, a machine learning model may be trained with images of video sequences of facial expressions, labelled by an expert either as 'malingering' or 'not malingering'."

Again, the only way the determination of malingering is described is by applying a machine learning evaluation system. 
Claim 15 recites that a machine learning system performs the determination of patient malingering; therefore the scope of claim 1 (and the other independent claims) covers embodiments wherein a machine learning model is NOT present. There is no disclosure in the specification of an embodiment wherein the malingering determination is performed without a machine learning model; therefore claim 1, and the claims that do not contain the machine learning model, are not enabled by the specification. Claim 15 is accordingly not enabled because, even though claim 15 recites machine learning for determining malingering, claim 1 recites processing eye-related and non-eye-related image data, which is not how the machine learning model is recited to operate in the specification.

Claim Rejections - 35 USC § 112(b) - Indefinite

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

The terms "eye-related" and "non-eye related" in claims 1, 16 and 31 are relative terms which render the claims indefinite. The terms "eye-related" and "non-eye related" are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Applicant refers to paragraphs [0014], [0023], [0033], [0039] and [0044] in the specification as providing support for the amendments. There is no definition of the terms "eye-related" and "non-eye related" in the specification. 
[0014] Embodiments pertain to a droopy upper eyelid evaluation system operative to differentiate between aesthetical and vision impairing ptosis by identifying, for a subject (also: patient), a degree of unaided prolapse of the upper eyelid, based on facial image data of the patient. In some embodiments, the droopy upper eyelid evaluation system is configured and/or operable to automatically or semi-automatically evaluate or characterize, based on facial image data of the patient, a droopy eyelid condition for one or both eyes of a patient, simultaneously or separately.

Is the upper eyelid "eye-related" or "non-eye related"?

[0023] In some embodiments, additional facial features may be captured by a camera and processed to determine, for example, whether the patient is trying to exaggerate droopy upper eyelid to malinger vision-impairing. Such facial features can pertain, for example, comparison to the patient's other eye, facial expressions and/or movements of the patient's mouth, eyebrows, cheekbones and/or forehead.

There is no definition of what features are eye-related and non-eye related. Is the patient's other eye "eye-related" or "non-eye related"? Are eyebrows "eye-related" or "non-eye related"? Examiner submits that eyebrows, by their very name, are "eye-related"; however, it is unclear whether Applicant intends an eyebrow to be an eye-related or a "non-eye related" facial feature.

[0033] In some embodiments, the ROI may not only include the patient's eye or eyes, but also additional portions of the patient face such as the forehead, nose, cheek, etc., for example, to capture and analyze the patient's facial muscle movement. Capturing images of the patient's face may facilitate determining whether a patient attempts to malinger or fake vision-impairing droopy eyelid, or not. 
In some examples, the ROI may also include non-facial portions, for instance, to capture a patient's body posture, which may also provide an indication whether a patient attempts to malinger vision-impairing droopy eyelid or not.

Which muscles indicate malingering? Are there nose muscles that indicate malingering? Is the recited eyelid considered "eye-related" or "non-eye related"?

[0039] In some embodiments, a droopy upper eyelid evaluation system 100 may provide a user of the system with indicators (e.g., visual and/or auditory) regarding a desired patient head orientation and, optionally, body posture, relative to camera 122 during the capturing of images of one or more facial features of the patient. For example, droopy upper eyelid evaluation system 100 may provide reference markings to indicate a desired yaw, pitch and/or roll orientation of the patient's head relative to camera 122. Capturing facial features at a desired head orientation may for example reduce, minimize or eliminate the probability of false positives (i.e., that the droopy eyelid is vision impairing) and/or of false negatives (that the droopy eyelid is not vision impairing).

There is no definition of what "eye-related" or "non-eye related" facial features are recited. Is the orientation of the person's head a "non-eye related" facial feature?

[0044] In step 420, the system identifies eye-related and, optionally, additional facial features. For example, the system identifies the iris 30 and pupil 20 as shown in FIG. 1 within the image using image or facial feature recognition techniques, which may be rule-based and/or based on machine learning models or algorithms.

There is no definition of what features are eye-related and non-eye related. Which additional facial features are considered "non-eye related," and which features are eye-related?

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 16-19, 27, 31 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of USPN 2001/056228 to Utsugi et al. and the Applicant-cited publication (IDS filed 1/30/2024) titled "Test for malingering in ophthalmology" by Incesu.

With regard to claim 1, Utsugi discloses a method for characterizing, for a subject, a droopy upper eyelid (Fig. 1, computers T, T1 and T2), the method comprising: receiving eye-related image data (Fig. 1, cameras C, and paragraphs [0076] and [0080]) descriptive of one or more eye-related features of the subject (paragraphs [0126]-[0128] and [0132]-[0136] and Figs. 4 and 7; the eye image is processed and judged in order to perform a diagnosis of drooping eyelid and its severity, such as whether the pupil is hidden and vision impairment is likely).

Utsugi does not explicitly disclose receiving non-eye-related image data descriptive of one or more non-eye related facial features of the subject, and processing the eye-related image data and the non-eye-related image data for determining whether the upper eyelid determined as droopy is due to patient malingering or not. Utsugi does, however, teach that the system is capable of conducting a more accurate judgment as to the drooping eyelid based on the presence/absence of elevated eyebrows (paragraph [0137]). 
Incesu teaches methods for determining malingering in ophthalmology (see page 708, "Abstract") and teaches that, in order to detect voluntarily generated ptosis (droopy eyelid), ipsilateral eyebrow depression is observed; if ipsilateral eyebrow depression is present, it is a case of malingering (see page 715, column 1, 7th paragraph, "Ptosis"). Incesu thus teaches that, in order to detect malingering of a droopy eyelid, one should take into consideration the position of the eyebrow. The eyebrow is interpreted as a "non-eye related facial feature," as there is no recited definition in Applicant's specification of what is considered to be a "non-eye related facial feature." Therefore it would have been obvious to one of ordinary skill in the art before the time of filing to use the malingering determination taught by Incesu, based on eyebrow position, in combination with the eyebrow determination and droopy eyelid determination of Utsugi, in order to accurately predict the presence of malingering of the user's eyelid.

With regard to claim 2, Utsugi discloses the method of claim 1, further comprising providing an output indicating whether the droopy upper eyelid is vision impairing or not, or whether the droopy upper eyelid is more likely vision impairing than not vision-impairing (paragraphs [0127]-[0128] and [0132]-[0136] and Figs. 4 and 7; the eye image is processed and judged in order to perform a diagnosis of drooping eyelid and its severity, such as whether the pupil is hidden and vision impairment is likely).

With regard to claim 3, Utsugi discloses the method of claim 1, wherein the determining includes identifying at least one geometric feature of the pupil for determining, based on the at least one geometric feature, whether the droopy upper eyelid is vision impairing or not, or whether the droopy upper eyelid is more likely vision impairing than not vision-impairing (paragraphs [0127]-[0128] and [0132]-[0136] and Figs. 
4 and 7; the eye image is processed and judged in order to perform a diagnosis of drooping eyelid and its severity, such as whether the pupil is hidden and vision impairment is likely; the geometric feature in this case is the location of the eyelid in relation to the pupil).

With regard to claim 5, Utsugi discloses the method of claim 3, wherein the at least one geometric feature of the pupil is the pupil curvature (paragraphs [0127]-[0128] and [0132]-[0136] and Figs. 4 and 7; the eye image is processed and judged in order to perform a diagnosis of drooping eyelid and its severity, such as whether the pupil is hidden and vision impairment is likely; the geometric feature in this case is the location of the eyelid in relation to the pupil and the pupil curvature, as seen in Figs. 6A-6C).

With regard to claim 16, Utsugi discloses a system for characterizing, for a subject, a droopy upper eyelid, the system comprising: at least one processor (Fig. 1, computer T); and at least one memory storing software code portions executable by the at least one processor to cause the system to perform (Fig. 1, computer T) any one or more of the following steps: receiving image data descriptive of one or more eye-related features of the subject (paragraphs [0127]-[0128] and [0132]-[0136] and Figs. 4 and 7; the eye image is processed and judged in order to perform a diagnosis of drooping eyelid and its severity, such as whether the pupil is hidden and vision impairment is likely. Fig. 1, display, and paragraph [0180]; a diagnosis of drooping eyelid and severity is displayed and output to a user). 
Utsugi does not explicitly disclose receiving non-eye-related image data descriptive of one or more non-eye related facial features of the subject, and processing the eye-related image data and the non-eye-related image data for determining whether a droopy upper eyelid is due to patient malingering or not.

Incesu teaches methods for determining malingering in ophthalmology (see page 708, "Abstract") and teaches that, in order to detect voluntarily generated ptosis (droopy eyelid), ipsilateral eyebrow depression is observed; if ipsilateral eyebrow depression is present, it is a case of malingering (see page 715, column 1, 7th paragraph, "Ptosis"). Incesu thus teaches that, in order to detect malingering of a droopy eyelid, one should take into consideration the position of the eyebrow. The eyebrow is interpreted as a "non-eye related facial feature," as there is no recited definition in Applicant's specification of what is considered to be a "non-eye related facial feature." Therefore it would have been obvious to one of ordinary skill in the art before the time of filing to use the malingering determination taught by Incesu, based on eyebrow position, in combination with the eyebrow determination and droopy eyelid determination of Utsugi, in order to accurately predict the presence of malingering of the user's eyelid.

With regard to claim 17, Utsugi discloses wherein the determining includes identifying at least one geometric feature of the pupil for determining, based on the at least one geometric feature, whether the droopy upper eyelid is vision impairing or not, or whether the droopy upper eyelid is more likely vision impairing than not vision-impairing (paragraphs [0127]-[0128] and [0132]-[0136] and Figs. 4 and 7; the eye image is processed and judged in order to perform a diagnosis of drooping eyelid and its severity, such as whether the pupil is hidden and vision impairment is likely. 
The geometric feature in this case is the location of the eyelid in relation to the pupil and the pupil curvature, as seen in Figs. 6A-6C).

With regard to claim 18, Utsugi discloses wherein the at least one geometric feature of the pupil comprises the following: the pupil curvature, the pupil diameter, a pupil area visible in the image, or any combination of the aforesaid (paragraphs [0127]-[0128] and [0132]-[0136] and Figs. 4 and 7; the eye image is processed and judged in order to perform a diagnosis of drooping eyelid and its severity, such as whether the pupil is hidden and vision impairment is likely; the geometric feature in this case is the location of the eyelid in relation to the pupil and the pupil curvature, as seen in Figs. 6A-6C. See also paragraphs [0159] and [0162]; the pupil diameter and its change are determined, as well as the area of the pupil).

With regard to claim 19, Utsugi discloses wherein the at least one geometric feature of the pupil includes a pupil area visible in the image (paragraphs [0127]-[0128] and [0132]-[0136] and Figs. 4 and 7; the eye image is processed and judged in order to perform a diagnosis of drooping eyelid and its severity, such as whether the pupil is hidden and vision impairment is likely; the geometric feature in this case is the location of the eyelid in relation to the pupil, as seen in Figs. 6A-6C).

With regard to claim 27, the discussions of claims 1 and 16 apply. Incesu teaches methods for determining malingering in ophthalmology (see page 708, "Abstract") and teaches that, in order to detect voluntarily generated ptosis (droopy eyelid), ipsilateral eyebrow depression is observed; if ipsilateral eyebrow depression is present, it is a case of malingering (see page 715, column 1, 7th paragraph, "Ptosis"). Incesu thus teaches that, in order to detect malingering of a droopy eyelid, one should take into consideration the position of the eyebrow.

Claim 31 is rejected under 35 U.S.C. 
103 as being unpatentable over the combination of USPN 2001/056228 to Utsugi et al. and the Applicant-cited publication titled "Test for malingering in ophthalmology" by Incesu.

With regard to claim 31, Utsugi discloses a system for identifying vision-impairing droopy eyelid, the system comprising a processor, memory, and one or more code sets stored in the memory and executed in the processor for performing: receiving eye-related image data captured by a camera descriptive of one or more eye-related features (paragraphs [0127]-[0128] and [0132]-[0136] and Figs. 4 and 7; the eye image is processed and judged in order to perform a diagnosis of drooping eyelid and its severity, such as whether the pupil is hidden and vision impairment is likely. Fig. 1, display, and paragraph [0180]; a diagnosis of drooping eyelid and severity is displayed and output to a user).

Utsugi does not explicitly disclose receiving non-eye-related image data captured by a camera descriptive of one or more non-eye related facial features, and processing the eye-related image data and the non-eye-related image data for determining whether a droopy upper eyelid is due to patient malingering or not.

Incesu teaches methods for determining malingering in ophthalmology (see page 708, "Abstract") and teaches that, in order to detect voluntarily generated ptosis (droopy eyelid), ipsilateral eyebrow depression is observed; if ipsilateral eyebrow depression is present, it is a case of malingering (see page 715, column 1, 7th paragraph, "Ptosis"). Incesu thus teaches that, in order to detect malingering of a droopy eyelid, one should take into consideration the position of the eyebrow. 
The eyebrow is interpreted as a “non-eye related facial feature,” as there is no recited definition in Applicant’s specification of what is considered to be a “non-eye related facial feature.” Therefore, it would have been obvious to one of ordinary skill in the art before the time of filing to use the malingering determination taught by Incesu based on eyebrow position, in combination with the eyebrow determination and droopy eyelid determination of Utsugi, in order to accurately predict the presence of malingering of the user's eyelid.

With regard to claim 34, Utsugi discloses wherein the system is further configured for: receiving image data descriptive of one or more eye-related features imaged under different lighting conditions, and/or imaged in the visible and/or infrared wavelength range (paragraphs [0159] and [0165]: Utsugi discloses that light is used in a deliberate way to image the subject's eye-related features, such as the pupil, with regard to depression diagnosis as it pertains to the detected drooping eyelid); and determining, based on the received image data, whether a droopy eyelid is due to patient malingering or not (the discussion of claim 1 applies).

Claims 9, 15, 22-25, 28 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of USPN 2001/056228 to Utsugi et al. and the Applicant-cited publication titled "Test for malingering in ophthalmology" by Incesu, and further in view of USPN 2019/0370959 to Krishna et al.

With regard to claim 9, Utsugi and Incesu disclose the method of claim 1, but do not explicitly disclose determining a distance D between a center C of the pupil and a feature of the upper eyelid. Krishna discloses an eyelid droop diagnosis similar to that of Utsugi and Incesu (paragraphs [0004], [0025] and [0029]) and further teaches determining several distances between a pupil and features of the upper eyelid (see Figs. 4 and 7 and paragraphs [0010], [0013], [0021], and [0024]-[0025]).
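The distance-D limitation of claim 9 amounts to a pupil-center-to-eyelid-margin measurement of the kind Krishna is cited for (comparable in spirit to the clinical MRD1 metric). A minimal sketch, assuming 2D landmark coordinates have already been extracted from the image; the coordinate values, names, and units below are hypothetical, not taken from the record:

```python
import math

def eyelid_droop_distance(pupil_center, upper_lid_edge):
    """Euclidean distance D between pupil center C and a feature of the
    upper eyelid (e.g. its lower central edge, per claim 23), in the
    same units as the landmark coordinates (e.g. pixels)."""
    cx, cy = pupil_center
    ex, ey = upper_lid_edge
    return math.hypot(ex - cx, ey - cy)

# Hypothetical landmark coordinates in pixels (image y-axis points down):
# pupil center C at (120, 80), lower central edge of upper lid at (120, 74).
D = eyelid_droop_distance((120.0, 80.0), (120.0, 74.0))
print(D)  # 6.0
```

A small D (lid margin close to, or below, the pupil center) would indicate a more severe droop; the threshold for "vision-impairing" would be a calibration choice outside this sketch.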
Therefore, it would have been obvious to one of ordinary skill in the art before the time of filing to use the distance calculations taught by Krishna in combination with the eyelid position determination of Utsugi in order to determine the exact position of the eyelid relative to the pupil for accurate diagnosis of eyelid droop.

With regard to claim 15, Utsugi and Incesu disclose the method of claim 1, but do not disclose wherein the characterizing of the vision-impairing droopy eyelid as the result of patient malingering or not is performed by a machine learning model. Krishna discloses a droopy eyelid diagnosis similar to that of Utsugi and Incesu and further teaches that a machine learning model is used in the detection and identification of facial landmarks (paragraph [0037]). Therefore, it would have been obvious to one of ordinary skill in the art before the time of filing to use a machine learning model as taught by Krishna to identify facial landmarks for use in the eyelid droop diagnosis of Utsugi and Incesu in order to learn over time and more accurately diagnose droopy eyelid.

With regard to claim 22, the discussion of claim 9 applies.

With regard to claim 23, Krishna discloses wherein the feature of the upper eyelid is the lower central edge of the upper eyelid (Fig. 7).

With regard to claim 24, the discussion of claim 11 applies.

With regard to claim 25, Krishna discloses wherein the position of center C of the pupil in a captured image frame may be determined based on light reflected from the pupil (Fig. 7 and paragraph [0024]: the pupil determination is made using light reflected from corneal light reflex 705).

With regard to claim 28, the discussion of claim 15 applies.

With regard to claim 33, Utsugi is concerned with head position as shown in Fig. 2, but does not explicitly disclose being configured to provide, based on the captured image, indicators regarding a desired head orientation and body posture of the patient relative to the camera for capturing, under the desired head orientation and/or the desired body posture, additional non-eye related and/or eye-related image data to reduce false-positives and/or false-negatives regarding the determination of droopy eyelid malingering by the subject. Krishna teaches that facial landmarks are determined, along with the pose of the face/head orientation relative to the camera (paragraphs [0036]-[0038]). The process of determining the orientation of an imaged face relative to the camera is commonplace in the art. Therefore, it would have been obvious to one of ordinary skill in the art before the time of filing to use the facial/head orientation determination taught by Krishna in combination with the facial feature identification of Utsugi in order to obtain an accurate measurement and the best conditions for capturing the facial features in the process of making a droopy eyelid determination.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESLEY J TUCKER, whose telephone number is (571) 272-7427. The examiner can normally be reached 9AM-5PM, Monday-Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN VILLECCO, can be reached on 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WESLEY J TUCKER/
Primary Examiner, Art Unit 2661

Prosecution Timeline

Dec 12, 2022
Application Filed
Feb 20, 2025
Non-Final Rejection — §101, §103, §112
Jun 22, 2025
Response Filed
Aug 19, 2025
Final Rejection — §101, §103, §112
Dec 19, 2025
Request for Continued Examination
Jan 06, 2026
Response after Non-Final Action
Feb 06, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597221
IMAGE PROCESSING APPARATUS AND ELECTRONIC APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12597222
METHOD AND SYSTEM FOR DETERMINING A REGION OF WATER CLEARANCE OF A WATER SURFACE
2y 5m to grant Granted Apr 07, 2026
Patent 12592057
SYSTEM AND METHOD FOR DETECTING AND CLASSIFYING RETINAL MICROANEURYSMS
2y 5m to grant Granted Mar 31, 2026
Patent 12585939
SYSTEMS AND METHODS FOR DISTRIBUTED DATA ANALYTICS
2y 5m to grant Granted Mar 24, 2026
Patent 12586410
Method and Device for Dynamic Recognition of Emotion Based on Facial Muscle Movement Monitoring
2y 5m to grant Granted Mar 24, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
83%
Grant Probability
90%
With Interview (+6.1%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 715 resolved cases by this examiner. Grant probability derived from career allow rate.
