Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-6 are pending. Claims 1-6 are rejected herein.
Priority
This application claims priority to provisional application No. 63/452,378. The priority date is March 15, 2023.
Distinguishable Subject Matter
Portions of claim 1 are subject matter free of prior art, including the following limitations:
displaying a prompt in the client device for a user to login or sign up to access the story-led monthly eye test;
login using an email or social media data associated with the user to access a platform providing the story-led monthly eye test;
Portions of claim 3 are subject matter free of prior art, including the following limitations:
displaying a story associated with measuring the eye data;
generating a timer indicating when the user can access the test and the story;
in response to the timer, providing a prompt with link corresponding to the story and test;
and in response to selecting the link, accessing a new chapter of the story along with the test.
In both cases, the prior art lacks this depth and detail concerning user interaction/user interfaces associated specifically with eye examinations/tests mediated via stories.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 2 recites “…and providing, based on the measured distance, a visual indication on a display of the client device.” Earlier in the claim, two distances are measured. It is unclear which measured distance the visual indication on the display of the client device is based on: the inter-pupillary distance (IPD) or the distance between the client device and the user. As such, the claim is indefinite and rejected for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (specifically, an abstract idea) without significantly more.
Step 1: Statutory Categories
The claims recite a method for managing a story-led monthly eye test that monitors eye diseases, a method for testing a user for an eye disease, a method for measuring eye data on a client device, a method for managing eye data, a method for measuring visual function, and a method for measuring eye data. All fall within a statutory category (a process) for subject matter eligibility purposes.
Step 2A Prong One: Recitation of an Abstract Idea
The limitations of (claim 1 being representative) receiving a link […] from an eye care specialist in response to a diagnosis of the eye disease for tracking the progression of the eye disease; displaying a prompt […] for a user to login or sign up to access the story-led monthly eye test; […] access a platform providing the story-led monthly eye test; receive input associated with user information; sending the results to the eye care specialist; generating […] the test results; displaying an alert, based on the test results exceeding a threshold; and in response to the alert, sending a request from the eye care specialist to schedule video or in-person appointment as drafted, is a process that, under the broadest reasonable interpretation, covers certain methods of organizing human activity (i.e., managing personal behavior including following rules or instructions) but for recitation of generic computer components.
The limitations of (claim 2 being representative) receiving input from an administrator, based in part on a prompt for user information; in response to the user information, determine the user qualifies to take the test; select a screen size to display at least one image; launch a game used for testing the eye disease; accept a voice generated input from the user; capture an image of the user to measure a pupillary distance; measure the distance between the user and an […entity…]; and providing, based on the measured distance, a visual indication […] as drafted, is a process that, under the broadest reasonable interpretation, covers certain methods of organizing human activity (i.e., managing personal behavior including following rules or instructions) but for recitation of generic computer components.
The limitations of (claim 3 being representative) displaying at least three symbols […]; based on input from a user, decrease a size of at least one symbol of the at least three symbols […]; displaying at least one image associated with at least one color value; in response to the user identifying the at least one image, modifying the at least one color value associated with the image; displaying a story associated with measuring the eye data; generating a timer indicating when the user can access the test and the story; in response to the timer, providing a prompt with link corresponding to the story and test; and in response to selecting the link, accessing a new chapter of the story along with the test as drafted, is a process that, under the broadest reasonable interpretation, covers certain methods of organizing human activity (i.e., managing personal behavior including following rules or instructions) but for recitation of generic computer components.
The limitations of (claim 4 being representative) generating a story […] to monitor the eye data associated with the first user; displaying account information including at least two profiles associated with the first user and a second user; accessing, based on input provided by the second user, eye data associated with the first user; displaying […] to a provider, […] eye data and at least two values associated with an axial length data associated with the first user, wherein the two values are associated with two separate points in time; and based on a difference between the at least two values associated with the axial length data and a threshold tolerance level, sending a link to at least the first user as drafted, is a process that, under the broadest reasonable interpretation, covers certain methods of organizing human activity (i.e., managing personal behavior including following rules or instructions) but for recitation of generic computer components.
The limitations of (claim 5 being representative) displaying at least one image associated with at least one color value and a position on a […]; in response to a user identifying the at least one image; generating a second image with a modified color value at the modified position; receiving an input from a user associated with the at least second image; determining, based on in part on the input from the user, a contrast threshold; determining, based in part on a visual acuity model and the contrast threshold, at least one value associated with a contrast acuity; and returning to a display of a narration in progress, the narration associated with measuring visual function as drafted, is a process that, under the broadest reasonable interpretation, covers certain methods of organizing human activity (i.e., managing personal behavior including following rules or instructions) but for recitation of generic computer components.
The limitations of (claim 6 being representative) receiving user input including login data or social media information; displaying account information including at least two profiles associated with the first user and a second user; launching a game, response to input received from the first user; accessing, based on input provided by the second user, eye data associated with the first user; displaying […data…] to a provider, […data…] including the eye data and at least two values associated with an axial length data associated with the first user, wherein the two values are associated with two separate points in time; and generating an alert […], based on the two values associated with an axial length data and a threshold value as drafted, is a process that, under the broadest reasonable interpretation, covers certain methods of organizing human activity (i.e., managing personal behavior including following rules or instructions) but for recitation of generic computer components.
That is, other than reciting a client device, a platform, and a display of the client device, the claimed invention amounts to managing personal behavior or interactions between people. For example, but for the additional elements identified, claim 1 encompasses a person logging into an account, accessing a story-based eye test, and sending the results to a specialist in the manner described in the identified abstract idea, supra. For example, but for these identified limitations, claim 2 encompasses a person providing a visual indication based on analysis of distances in an image in the manner described in the identified abstract idea, supra. For example, but for these identified limitations, claim 3 encompasses a person modifying a visual stimulus and displaying and accessing a story and test based on this data in the manner described in the identified abstract idea, supra. For example, but for these identified limitations, claim 4 encompasses a person generating a story to monitor eye data, maintaining profiles for two users, presenting eye data and axial length values from two points in time to a provider, and sending a link based on a difference relative to a threshold tolerance level in the manner described in the identified abstract idea, supra. For example, but for these identified limitations, claim 5 encompasses a person modifying a visual stimulus, determining a contrast threshold and a contrast acuity value from user responses, and returning to a narration in progress in the manner described in the identified abstract idea, supra. For example, but for these identified limitations, claim 6 encompasses a person receiving user input including login data or social media information, displaying account information with different profiles, launching a game, accessing eye data associated with the first user, displaying information to a provider, the information including the eye data and at least two values associated with an axial length associated with the first user, and generating a certain type of alert in the manner described in the identified abstract idea, supra.
The Examiner notes that certain “methods of organizing human activity” include a person’s interaction with a computer (see MPEP 2106.04(a)(2)(II)). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Step 2A Prong Two: Practical Application
These judicial exceptions are not integrated into a practical application. In particular, the claims recite the additional elements of a client device, a platform, and a display of the client device that implement the identified abstract idea. The client device, platform, and display of the client device are not described in detail by the applicant and are recited at a high level of generality (i.e., generic computer components performing generic computer functions) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claim further recites the additional element of a portal. The portal merely generally links the abstract idea to a particular technological environment or field of use. MPEP 2106.04(d)(I) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide a practical application.
Step 2B: Significantly More
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using the client device, platform, and/or display of the client device to perform the noted steps amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept (“significantly more”).
Claims 4 and 6 further recite the additional element of a portal. The portal merely generally links the abstract idea to a particular technological environment or field of use. MPEP 2106.04(d)(I) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide a practical application or significantly more.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim 5 is rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by US 2022/0160223 A1 (hereafter Bradley).
Regarding Claim 5
A method for measuring visual function, the method comprising: displaying at least one image associated with at least one color value and a position of a display of a client device; [Bradley teaches at para. [0014] the present specification discloses a method of evaluating a user's visual field using a head-mounted device configured to be positioned on the user's head, wherein the device comprises at least one processor, a display in data communication with the at least one processor, and a nontransient memory in data communication with the at least one processor and adapted to store programmatic instructions that, when executed, execute said method, the method comprising: generating a first plurality of visual stimuli, wherein the first plurality of visual stimuli is presented in a form of a grid defined by two or more vertical lines intersecting two or more horizontal lines, wherein the grid covers a first plurality of coordinate locations in the visual field, and wherein each of the first plurality of visual stimuli has at least one of a first plurality of characteristics; causing the first plurality of visual stimuli to be displayed on the display in accordance with its first plurality of characteristics; detecting a discrepancy based on a comparison of the first plurality of characteristics with a user's response that is indicative of the visual characteristics of the first plurality of visual stimuli experienced by the user; storing the detected discrepancy as a first set of data; using the first set of data to generate a second plurality of visual stimuli, wherein each of the second plurality of visual stimuli has at least one of a second plurality of characteristics and is associated with a second plurality of coordinate locations and wherein a number of the second plurality of coordinate locations is less than a number of the first plurality of coordinate locations; causing each of the second plurality of visual stimuli to be displayed on said display in 
accordance with its one of the second plurality of coordinate locations and one of the second plurality of characteristics; receiving responses from the user, wherein the responses are indicative of the visual characteristics of the second plurality of visual stimuli experienced by the user; and determining attributes of the user's visual field based on the detected discrepancy and the responses that are indicative of the visual characteristics of the second plurality of visual stimuli experienced by the user. Bradley teaches at para. [0053] optionally, color or luminance is at least one of the plurality of predefined visual characteristics of the plurality of visual stimuli that changes between each of the plurality of trials.]
in response to a user identifying the at least one image, modifying the at least the one color value and the position of the at least one image; [Bradley teaches at para. [0014], quoted in full above, detecting a discrepancy based on a comparison of the first plurality of characteristics with the user’s response, using the first set of data to generate a second plurality of visual stimuli associated with a second plurality of coordinate locations, and causing each of the second plurality of visual stimuli to be displayed in accordance with its coordinate location and characteristics. Bradley teaches at para. [0053] optionally, color or luminance is at least one of the plurality of predefined visual characteristics of the plurality of visual stimuli that changes between each of the plurality of trials.]
generating a second image with the modified color value at the modified position; [Bradley teaches at para. [0014], quoted in full above, generating a second plurality of visual stimuli, wherein each has at least one of a second plurality of characteristics and is associated with a second plurality of coordinate locations, and causing each to be displayed on the display in accordance with its coordinate location and characteristics. Bradley teaches at para. [0053] optionally, color or luminance is at least one of the plurality of predefined visual characteristics of the plurality of visual stimuli that changes between each of the plurality of trials.]
receiving an input from a user associated with the at least second image; [Bradley teaches at para. [0050] that optionally, a level of contrast is at least one of the plurality of predefined visual characteristics of the plurality of visual stimuli that changes between each of the plurality of trials. Bradley teaches at para. [0166] in some embodiments, color and color sensitivity will be set or adjusted based on a plurality of different controls. Collectively, this teaches receiving an input from a user associated with the at least second image.]
determining, based on in part on the input from the user, a contrast threshold; [Bradley teaches at para. [0050] that optionally, a level of contrast is at least one of the plurality of predefined visual characteristics of the plurality of visual stimuli that changes between each of the plurality of trials. Bradley teaches at para. [0165] that in various embodiments, the user will adjust settings using input device, which may be a touchpad, and which is electrically connected to smartphone, which is further programmed to modify the VE App according to such inputs; a Bluetooth game controller that communicates with the smartphone via Bluetooth; voice control using the microphone of the phone; or gesture control using available devices such as the NOD gesture control ring. Bradley teaches at para. [0166] in addition, there are other features of vision-assist system that can either be set up once for a user or may be user-adjustable. Bradley teaches at para. [0166] these features will include but are not limited to, adjustments to the magnitude, shape, size, or placement of minified or magnified portions of the output image, and color enhancement functions such as contrast, blur, ambient light level or edge enhancement of the entire image or portions of the image.]
determining, based in part on a visual acuity model and contrast threshold, at least one value associated with a contrast acuity; [Bradley teaches at para. [0189] the present specification provides a method of conducting vision tests such as, but not limited to: a distance visual acuity test, a contrast sensitivity test, and a contrast acuity by using the Landolt C (or Landolt ring) or any other m-AFC testing paradigm. Bradley teaches at para. [0187] in one embodiment, the present specification provides systems and methods for assessing the visual acuity of users and compensating for the limited resolution of the head mounted display, by measuring users’ contrast sensitivity (CS) and mapping the measured CS to acuity. Bradley teaches at para. [0206] at step 216, the head mounted vision device fits all of the user’s acuity levels recorded at step 214 with a predefined psychometric function to determine a final value indicative of the user’s acuity level. Bradley teaches at para. [0206] in embodiments, step 216 comprises replacing the threshold parameters in a psychometric model with the obtained acuity relations. Bradley teaches at para. [0206] in an embodiment, the head mounted vision device uses the psychometric model disclosed by Alexander et al. in the paper titled “Visual Acuity and Contrast Sensitivity for Individual Sloan Letters” published in ‘Vision Research’ Volume 37, Issue 6, March 1997, Pages 813-819 which is incorporated herein by reference, is used at step 216 by replacing the contrast sensitivity thresholds by the acuity values recorded at step 214. Bradley teaches at para. [0211] that, as would be apparent to persons of skill in the art, the principle of acuity extrapolation through replacing the threshold parameter in typical psychometric functions with a parametrized relation among acuity thresholds will be applied to any psychometric model that relates acuity thresholds as a function of any physical property of the stimulus provided for visual testing.]
and returning to a display of a narration in progress, the narration associated with measuring visual function. [Bradley teaches at para. [0206] at step 216, the head mounted vision device fits all of the user’s acuity levels recorded at step 214 with a predefined psychometric function to determine a final value indicative of the user’s acuity level. This teaches measuring visual function. Bradley teaches at para. [0239] the video diary enables users to share difficult or impaired situations with the clinician, via a narration of what the user is seeing at that time. This teaches the narration associated with measuring visual function. Collectively, Bradley teaches returning to a display of a narration in progress, the narration associated with measuring visual function.]
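As general context for the psychometric-model mapping the Examiner cites from Bradley (paras. [0187], [0206]), the idea of estimating a contrast threshold from trial responses and mapping it to an acuity value can be sketched as below. The Weibull form, slope, lapse rate, and the linear log-contrast-to-acuity mapping are illustrative assumptions only, not Bradley's or Alexander et al.'s actual model:

```python
import math

def psychometric(contrast, threshold, slope=3.0, guess=0.5, lapse=0.02):
    """Weibull psychometric function: probability of a correct response
    at a given stimulus contrast (guess rate 0.5 models a 2-AFC task)."""
    p = 1.0 - math.exp(-((contrast / threshold) ** slope))
    return guess + (1.0 - guess - lapse) * p

def estimate_contrast_threshold(trials, candidates):
    """Pick the candidate threshold maximizing the likelihood of the
    observed (contrast, correct) trial data -- a simple grid search
    standing in for a proper maximum-likelihood fit."""
    def log_likelihood(th):
        ll = 0.0
        for contrast, correct in trials:
            p = min(max(psychometric(contrast, th), 1e-9), 1 - 1e-9)
            ll += math.log(p if correct else 1.0 - p)
        return ll
    return max(candidates, key=log_likelihood)

def contrast_to_acuity(contrast_threshold, k=0.3):
    """Hypothetical linear mapping from log contrast threshold to an
    acuity value; a real mapping would be calibrated to the display."""
    return k * math.log10(1.0 / contrast_threshold)
```

A production system would fit the threshold continuously rather than over a candidate grid; the sketch only shows the shape of the computation.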
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over US 2017/0237977 A1 (hereafter Patel) in view of WO 2021/146748 A1 (hereafter Kubota), further in view of US 2009/0248442 A1 (hereafter Pacheco), further in view of US 2024/0203309 A1 (hereafter Joseph), and further in view of WO 2018/164636 A1 (hereafter Gunasekeran).
Regarding Claim 2
Patel teaches:
[…]
accept a voice generated input from the user; [Patel teaches at para. [0048] that input devices provide a portion of a user interface. Patel teaches at para. [0048] input devices will include an alpha-numeric keypad, such as a keyboard, for inputting alpha-numeric and other information, a pointing device such as a mouse, a trackball, stylus, cursor direction keys, microphone, touch-screen, accelerometer, and other input device. Patel teaches at the end of claim 1 “…a communication interface coupled to the motor, the communication interface able to receive one or more signals that drive the motor.” Collectively, Patel teaches accepting a voice generated input from the user.]
capture an image of the user to measure a pupillary distance; [Patel teaches at Figure 8 Item 810 capture image of user face and scale indicator. Patel teaches at Figure 8 Item 820 process image to identify user pupils within image. Patel teaches at Figure 8 Item 830 calculating inter-pupillary distance (IPD) between centers of user pupils.]
measure the distance between the user and a client device; [Patel teaches at Claim 8 “…identifying the current inter-display distance in the head mount display unit; and…”. Patel teaches at Claim 13 wherein identifying the current inter-display distance is provided by a positional encoder within the head mount display unit. Because the display is mounted on the head of the user this measures the distance between the user and a client device.]
and providing, based on the measured distance, a visual indication on a display of the client device. [Patel teaches at Figure 8 Item 850 receiving user IPD measurement by computing device associated with user head mount display and at Item 860 automatically adjust inter-lens distance on head mount display based on user IPD. Note that the broadest reasonable interpretation of the claim does not restrict the visual indication to being based on the measured distance between the user and the client device; it could instead be based on the measured IPD. Patel teaches at para. [0004] in an embodiment, the head mount display will provide visual content as part of a virtual reality experience for a user and automatically adjust an inter-pupillary distance for a user.]
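The Figure 8 flow the Examiner cites from Patel (capture an image containing a scale indicator, locate the pupil centers, compute the IPD) reduces to a pixel-to-millimeter scale conversion. A minimal sketch follows, with all coordinates, the scale values, and the function name hypothetical rather than drawn from Patel:

```python
import math

def ipd_from_pupil_pixels(left_pupil, right_pupil, scale_px, scale_mm):
    """Estimate inter-pupillary distance (IPD) in millimeters.

    left_pupil / right_pupil are (x, y) pupil-center pixel coordinates
    located in the face image; scale_px is the measured pixel length of
    a scale indicator of known physical length scale_mm (e.g., a card)
    visible in the same image."""
    pixel_dist = math.dist(left_pupil, right_pupil)  # Euclidean pixel distance
    mm_per_px = scale_mm / scale_px                  # calibration from the indicator
    return pixel_dist * mm_per_px
```

The accuracy of such a computation depends on the scale indicator lying in roughly the same plane as the eyes; Patel's actual image processing is not reproduced here.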
Patel may not explicitly teach:
A method for testing user for an eye disease, the method comprising: receiving input from an administrator, based in part on a prompt for user information;
in response to the user information, determine the user qualifies to take the test;
select a screen size to display at least one image;
launch a game used for testing the eye disease;
Kubota teaches:
A method for testing user for an eye disease, the method comprising: receiving input from an administrator, based in part on a prompt for user information; [Kubota teaches at para. [0060] that permissions (such as access rights and functionalities) may be role dependent. Kubota teaches at para. [0060] for example, logging in with an HCP user account allows the user to see different type(s) of information than logging in with a nurse/technician user account. Collectively, this is receiving input from an administrator. Kubota teaches at para. [0061] that this reconfiguring comprises the allocation and storage of specific types of data (e.g., patient personal data, patient identifiers/tokens, retinal scan data, processed retinal scan data, patient medical history data, etc.) in specific databases or partitions of databases, based at least in part on the desired use of that data and whether the requester is authorized to access certain data. This teaches a request for user information. Kubota teaches at para. [0164] the system of clause 1, wherein the server is further configured to generate a user interface for an authorized user to permit access to one or more of the plurality of databases or database partitions. The user interface generated as a result of a request for user information is interpreted as a prompt.]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of healthcare, at the time of filing, to combine the head mount display with automatic inter-pupillary distance adjustment of Patel with the database of retinal physiology derived from ophthalmic measurements performed by patients of Kubota, with the motivation of improving upon prior databases and systems that can be less than ideally suited for handling confidential patient data, and of allowing the patient data to be analyzed with algorithms or otherwise processed to identify disease progression while protecting confidential patient information from being improperly accessed or misused (Kubota at para. [0002]).
Patel/Kubota may not explicitly teach:
in response to the user information, determine the user qualifies to take the test;
select a screen size to display at least one image;
launch a game used for testing the eye disease;
Pacheco teaches:
[…]
in response to the user information, determine the user qualifies to take the test; [Pacheco teaches at Claim 11 a method for determining a validation status of an examination request for a patient, the examination request having content including a plurality of examination data defining a clinic condition of the patient, the method comprising: receiving the examination request via a communication network; storing a plurality of predefined clinical definitions associated with at least one examination type, the at least one examination type having a match threshold including a subset definition set from a plurality of predefined clinical definitions; conducting a first stage analysis of the content by comparing the content with the plurality of predefined clinical definitions in order to determine one or more matching definitions; comparing the matching definitions against the match threshold of each of the at least one examination type for determining a validation indicator of the examination request; and transmitting the validation status of the exam request as an exam response via the communications network, the exam response including the validation indicator. The examination request is interpreted as being in response to the user information.]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of healthcare, at the time of filing, to combine the head mount display with automatic inter-pupillary distance adjustment of Patel and the database of retinal physiology derived from ophthalmic measurements performed by patients of Kubota with the processing of clinical data for validation of a selected clinical procedure of Pacheco, with the motivation of addressing the fact that, with so many possibilities, it can be difficult for the medical practitioner to order the examination/procedure that is appropriate (e.g. valid) to the patient consultation at hand (Pacheco at para. [0002]).
Patel/Kubota/Pacheco may not explicitly teach:
select a screen size to display at least one image;
launch a game used for testing the eye disease;
Joseph teaches:
[…]
select a screen size to display at least one image; [Joseph teaches at para. [0019] that, in some aspects, a physical size of the display screen may be adjustable by a user (e.g., the display screen may be a rollable display, a foldable display, or the like).]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of healthcare, at the time of filing, to combine the head mount display with automatic inter-pupillary distance adjustment of Patel, the database of retinal physiology derived from ophthalmic measurements performed by patients of Kubota, and the processing of clinical data for validation of a selected clinical procedure of Pacheco with the adaptation of a user interface responsive to screen size adjustment of Joseph, with the motivation of improving screen size adjustment for users of displays.
Patel/Kubota/Pacheco/Joseph may not explicitly teach:
launch a game used for testing the eye disease;
Gunasekeran teaches:
launch a game used for testing the eye disease; [Gunasekeran teaches at the Abstract social remote eye screening and monitoring gamification through integration of modular assessments of visual function into the objectives of interactive games to encourage compliance with testing instructions to facilitate improved fidelity, remote frequent reassessment/trending, and/or automated interpretation. Gunasekeran teaches at Claim 15 a game configured, when a video from the game is presented to a subject who interacts with the video, to allow assessment of the visual performance of the subject.]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of healthcare, at the time of filing, to combine the head mount display with automatic inter-pupillary distance adjustment of Patel, the database of retinal physiology derived from ophthalmic measurements performed by patients of Kubota, the processing of clinical data for validation of a selected clinical procedure of Pacheco, and the adaptation of a user interface responsive to screen size adjustment of Joseph with the visual performance assessment of Gunasekeran, with the motivation of addressing test-retest variability, which has been described in numerous clinic-based assessments, particularly in patients with eye diseases (Gunasekeran at pg. 1, lines 24-25).
Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over WO 2021/146748 A1 (hereafter Kubota) in view of Giannini (Reading a Story: Different Degrees of Learning in Different Learning Environments) and further in view of CN 118984671 A (hereafter Ran).
Regarding Claim 4
Kubota teaches:
[…]
displaying account information including at least two profiles associated with the first user and a second user; [Kubota teaches at Claim 30 the apparatus of claim 21, wherein the apparatus is further configured to generate a user interface for an authorized user to permit access to one or more of the plurality of databases or database partitions. Kubota teaches at para. [0047] that different types of personal information (e.g., demographic data, name and contact information, medical records or medical history) may be encrypted using different keys to permit access to the information to be segmented and allow different types of personal information to be provided to different users. Kubota teaches at para. [0060] that permissions (such as access rights and functionalities) may be role dependent. Kubota teaches at para. [0060] for example, logging in with an HCP user account allows the user to see different type(s) of information than logging in with a nurse/technician user account. The HCP and the nurse/technician are interpreted as a first/second user. Additionally, Kubota teaches at para. [0068] that the first and second databases can be configured in many ways and, in some embodiments, the PBOS system and architecture as described herein allows user-role-restricted access to patient information and provides corresponding raw- and image-data from the S3 storage to authorized users. Kubota teaches that a user-role may refer to a caregiver, a physician, a provider of the system for acquiring the retinal scan data, a researcher, etc.]
accessing, based on input provided by the second user, eye data associated with the first user; [Kubota teaches at para. [0079] that, given the above types of data, the data may be stored in a database or databases in different ways, with the storage approach used being selected or determined by consideration of both the type of data (raw, processed, patient medical history, patient identifying data, etc.) and the purpose of the end-user in seeking access, while taking into account any relevant data privacy or regulatory regulations. Kubota teaches at para. [0047] that different types of personal information (e.g., demographic data, name and contact information, medical records or medical history) may be encrypted using different keys to permit access to the information to be segmented and allow different types of personal information to be provided to different users. Kubota teaches at para. [0047] that, in some embodiments, stored retinal thickness data will be associated with a unique identifier, with a separate data file “mapping” that identifier to a set of patient personal data. The end-user seeking access is accessing, based on input provided by the second user, patient medical history. Kubota teaches at Fig. 4A, Item 406, a PBOS Web Portal connecting a browser to BOS Cloud Services. Kubota teaches at para. [0130] that, in some embodiments, patient specific parameters that are stored in the cloud & tablet, and transmitted to the HU each time it is connected, will include OPD correction value (right/left), refraction error (right/left), binocular/monocular, and axial length. This teaches eye data associated with the first user. Collectively, this teaches accessing, based on input provided by the second user, eye data associated with the first user.]
displaying a portal to a provider, the portal including the eye data and at least two values associated with an axial length data associated with the first user, wherein the two values are associated with two separate points in time; [Kubota teaches at para. [0077] that patient medical history may comprise one or more of tests, diagnosis, reports or evaluations by medical professionals, or medical records. This teaches two tests resulting in two axial length values (taught below) associated with two separate points in time. Kubota teaches at para. [0130] that, in some embodiments, patient specific parameters that are stored in the cloud & tablet, and transmitted to the HU each time it is connected, may include OPD correction value (right/left), refraction error (right/left), binocular/monocular, and axial length. This teaches at least one axial length value. Kubota teaches at Fig. 4A, Item 406, a PBOS Web Portal connecting a browser to BOS Cloud Services. Kubota teaches at para. [0078] that the processed raw scan data may comprise filtered, thresholded data evaluated by a trained machine learning (ML) model, for example. This teaches a threshold value. Kubota teaches at Claim 18 the system of claim 1, wherein the server is further configured to create alerts for one or more of a caregiver and the patient, wherein the alerts are intended to recall patients to the caregiver for further diagnosis and/or treatment. Kubota teaches at para. [0073] that, in some embodiments, a record of all alerts issued is stored in a third database, so that this record is immediately accessible to database administrators and caregivers without the need to search through the second database. Kubota teaches at para. [0060] that the web portal provides the main user interface for the healthcare professional, the configuration provider, the IDTF staff, the nurse/technician and administrators. Collectively, Kubota teaches displaying a portal to a provider, the portal including the eye data and at least two values associated with an axial length data associated with the first user, wherein the two values are associated with two separate points in time.]
Kubota may not explicitly teach:
A method of managing eye data, the method comprising: generating a story displayed on a client device to monitor the eye data associated with the first user;
and based on a difference between the at least two values associated with the axial length data and a threshold tolerance level, sending a link to at least the first user.
Giannini teaches:
A method of managing eye data, the method comprising: generating a story displayed on a client device to monitor the eye data associated with the first user; [Giannini teaches at the Abstract specifically, children were asked to learn the same material shown in three different learning environments: reading illustrated books (TB); interacting with the same text displayed on a PC monitor and enriched with interactive activities (PC-IA); reading the same text on a PC monitor but not enriched with interactive narratives (PC-NoIA). This teaches generating a story displayed on a client device to monitor the eye data associated with the first user. Giannini teaches at the Abstract, in contrast, PC-IA and PC-NoIA produced higher scores for visuo-spatial memory, enhancing memory for spatial relations, positions and colors with respect to TB. The memories for spatial relations, positions and colors with respect to TB are interpreted as eye data.]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of healthcare, at the time of filing, to combine the database of retinal physiology derived from ophthalmic measurements performed by patients of Kubota with the story-based learning environments of Giannini, with the motivation that learning environments are most effective when they elicit effortful cognitive processing by guiding learners in actively constructing meaningful relationships rather than encouraging passive recording and storage of information (Giannini at pg. 2).
Kubota/Giannini may not explicitly teach:
and based on a difference between the at least two values associated with the axial length data and a threshold tolerance level, sending a link to at least the first user.
Ran teaches the following noted feature:
and based on a difference between the at least two values associated with the axial length data and a threshold tolerance level, sending a […] to at least the first user. [Ran teaches at pg. 4, Summary, “…and (g) calculating the difference between the initial axial length Ale and the new true axial length Ale to obtain a change in axial length.” Ran teaches at pg. 6 that the method may include determining whether the average right eye deviation and the average left eye deviation differ by more than a threshold, and providing an alert if the average right eye deviation and the average left eye deviation differ by more than a threshold.]
Kubota teaches the following noted feature:
…link… [Kubota teaches at para. [0087] that, in some embodiments, a caregiver may elect to create a separate database or partition for each individual patient, the design and contents of which may be customized by the caregiver through a Wi-Fi link or via the cellular network. This teaches a Wi-Fi link.]
It would have been prima facie obvious to one of ordinary skill in the art at the time the invention was made to combine the noted features of Ran with the teaching of Kubota, since the combination of the two references is merely the simple substitution of one known element for another producing a predictable result (KSR rationale B). Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself, that is, in the substitution of the link of the secondary reference(s) for the alert means of the primary reference. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over WO 2021/146748 A1 (hereafter Kubota) in view of WO 2018/164636 A1 (hereafter Gunasekeran).
Regarding Claim 6
Kubota teaches:
A method for measuring eye data, the method comprising: receiving user input including login data or social media information; [Kubota teaches at para. [0060] that permissions (such as access rights and functionalities) may be role dependent. Kubota teaches at para. [0060] for example, logging in with an HCP user account allows the user to see different type(s) of information than logging in with a nurse/technician user account. Collectively, th