Prosecution Insights
Last updated: April 19, 2026
Application No. 18/548,857

VISION-BASED COGNITIVE IMPAIRMENT TESTING DEVICE, SYSTEM AND METHOD

Non-Final OA (§102, §103)
Filed: Sep 01, 2023
Examiner: HENSON, DEVIN B
Art Unit: 3791
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Evolution Optiks Limited
OA Round: 1 (Non-Final)
Grant Probability: 65% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (505 granted / 777 resolved); above average overall, though -5.0% vs TC avg
Interview Lift: +43.5% (allow rate of resolved cases with vs. without an interview)
Avg Prosecution: 3y 11m typical timeline; 43 applications currently pending
Career History: 820 total applications across all art units
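A minimal sketch of how the headline figures above appear to fit together. The cohort reading of "interview lift" (the gap in allow rate between interviewed and non-interviewed resolved cases) is our assumption; only 505/777, 65%, 99%, and +43.5% come from the page.

```python
# Hedged reconciliation of the examiner stats shown above.
granted, resolved = 505, 777
career_rate = granted / resolved
print(f"Career allow rate: {career_rate:.1%}")                     # 65.0%, as reported

with_interview = 0.99      # reported "With Interview" probability
lift = 0.435               # reported interview lift
without_interview = with_interview - lift
print(f"Implied rate without interview: {without_interview:.1%}")  # ~55.5%

# Interview share x that blends the two cohorts back to the 65% career rate:
# without*(1-x) + with*x = career_rate
x = (career_rate - without_interview) / lift
print(f"Implied share of resolved cases interviewed: {x:.0%}")     # ~22% (inferred)
```

Under that reading, the truncated "resolved cases with interview" figure would be roughly a fifth of the examiner's docket; the page does not state this outright, so treat it as an interpretation.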

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      4.9%     -35.1%
§103      44.4%    +4.4%
§102      23.9%    -16.1%
§112      23.6%    -16.4%

Tech Center averages are estimates • Based on career data from 777 resolved cases
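The "vs TC avg" column can be reproduced as examiner rate minus Tech Center baseline. Backing the baseline out of the published deltas gives roughly 40.0% for every statute, which suggests a single TC-wide average line on the original chart; that inference is ours, not stated on the page.

```python
# Back out the Tech Center baseline implied by each published delta.
examiner_rate = {"§101": 4.9, "§103": 44.4, "§102": 23.9, "§112": 23.6}
delta_vs_tc = {"§101": -35.1, "§103": 4.4, "§102": -16.1, "§112": -16.4}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]   # implied TC average, ~40.0% in all cases
    print(f"{statute}: {rate:.1f}% examiner vs {tc_avg:.1f}% TC avg "
          f"({delta_vs_tc[statute]:+.1f}%)")
```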

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice of Amendment

In response to the amendment filed on 3/25/2024, amended claims 6, 8, 10-11, 13-14, and 16-18 are acknowledged. Claims 1-19 are currently pending.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. No claim limitation has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Objections

Claim 16 is objected to because of the following informalities: Claim 16 recites the limitation “respective selectable or tunable lenses tunable to dynamically optically force the left and right eye to accommodate such that said designated visual digital test content is simultaneously perceived by the left and right eye, respectively, to be at said common virtual position relative” in lines 2-5, which it appears should instead recite “respective selectable or tunable lenses tunable to dynamically optically force the left and right eye to accommodate such that said designated visual digital test content is simultaneously perceived by the left and right eye, respectively, to be at said common virtual position”. Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-3, 5-8, and 10-19 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Macnamara et al. (US Publication No. 2018/0136486 A1) (cited by Applicant).

Regarding claim 1, Macnamara et al. discloses a vision-based testing device for digitally implementing a vision-based test for a user using both their left and right eye simultaneously, the device comprising: left and right display portions comprising respective pixel arrays (see [0060] – “In some other embodiments, the display device may include multiple light emitting modules 27, and each eye may have at least one light emitting module configured to direct light to that eye” and [0083] – “Two integral imaging displays 62 can be provided, each generally being used to display processed light field image data to one of the user's eyes. Specifically, a left integral imaging display 62 can be provided and aligned to display light field image data to the user's left eye. Similarly, a right integral imaging display 62 can be provided and aligned to display light field image data the user's right eye”); corresponding light field shaping element (LFSE) arrays (712) of light field shaping elements respectively disposed at a distance from said display portions so to at least partially govern respective left and right light fields projected on the user's left and right eye, respectively, wherein perception of said respective left and right light fields is at least partially constrained to the left and right eye, respectively (see [0086] – “As shown in FIG. 7, the integral imaging display 62 includes a two-dimensional array of micro-lenses 712 and a two-dimensional array of light sources 714” and [0238] – “At block 1308, the light field processor system alters the field of view of one or both eyes. This can include, for example, partially or fully computationally occluding the eye(s) (e.g., by blacking out all or a portion of the image data to be presented to each eye). In some embodiments, the light field processor system may present different images to each eye, sometimes in different locations within the field of view, to either strengthen a weaker eye or promote proper vergence for image fusion within the brain”); and a digital data processor (70) operable on pixel data for designated visual digital test content, to simultaneously render said designated visual digital test content via said respective pixel arrays in accordance with the vision-based test to be respectively projected toward respective user pupil locations in accordance with respective light field view zones generated via said respective pixel arrays and corresponding LFSE arrays to be simultaneously perceived by the left and right eye, respectively, to be at a common virtual position relative to the left and right eye so to invoke a natural binocular eye vergence response corresponding to said common virtual position (see [0069] – “In both the multi-plane focus systems and variable plane focus systems, the patient-worn health system may employ eye tracking to determine the vergence of the user's eyes, to determine the user's current focus, and to project the virtual image at the determined focus”, [0075] – “The eye tracking module may be configured to determine the vergence of the user's eyes in order to determine what the appropriate normal accommodation would be (through the direct relationship between vergence and accommodation) for the projection of one or more virtual images, and may also be configured to track one or more eye-related parameters (e.g., position of the eye, eye movements, eye patterns, etc.)”, [0180] – “Likewise, in various embodiments, the light field processor system may be configured to determine a focal depth at which the eyes are focused or accommodated. In some embodiments, eye-tracking system may be used to triangulate the user's convergence point and adjust the focus of the images to be presented to the user accordingly. For example, the eye-tracking system may determine a direction along which each eye is viewing (e.g., a line extending from each eye) and determine a convergence angle where the directions intersect. The convergence point may be determined from the determined angle of convergence. In some embodiments, the eye-tracking system may be included as part of the biofeedback system. As described above, in various embodiments, the light field processor system may utilize cameras 24 paired with light sources 26 (e.g., an infrared light source and infrared camera) to track the position of each eye, which can be operatively coupled to the local processing module 70. The local processing module 70 may include software that, when executed, may be configured to determine the convergence point of the eyes. From this determination, the light field processor system may also execute logic device to determine a focus location or depth based on the orientation or direction of the user's gaze”, [0224] – “As an example, if the convergence is offset in an angular fashion, a compensating prism correction may be computationally applied to bring the convergence point of both eyes together. The compensating prism correction may be computationally applied by the light field processor to the collected light field image data”, and [0226] – “For example, a computational compensating prism correction can be applied as an optical correction for the wearer (e.g., to compensate for convergence deficiencies in one or both eyes of the wearer). This correction can be applied to collected light field image data to account for the deficiencies of the wearer so that the wearer can achieve or approximate binocular single vision even where the wearer suffers from strabismus and/or amblyopia”).

Regarding claim 2, Macnamara et al. discloses said common virtual position comprises a virtual depth position relative to said display portions (see [0180] – “Likewise, in various embodiments, the light field processor system may be configured to determine a focal depth at which the eyes are focused or accommodated. In some embodiments, eye-tracking system may be used to triangulate the user's convergence point and adjust the focus of the images to be presented to the user accordingly. For example, the eye-tracking system may determine a direction along which each eye is viewing (e.g., a line extending from each eye) and determine a convergence angle where the directions intersect. The convergence point may be determined from the determined angle of convergence. In some embodiments, the eye-tracking system may be included as part of the biofeedback system. As described above, in various embodiments, the light field processor system may utilize cameras 24 paired with light sources 26 (e.g., an infrared light source and infrared camera) to track the position of each eye, which can be operatively coupled to the local processing module 70. The local processing module 70 may include software that, when executed, may be configured to determine the convergence point of the eyes. From this determination, the light field processor system may also execute logic device to determine a focus location or depth based on the orientation or direction of the user's gaze”).

Regarding claim 3, Macnamara et al. discloses said left and right display portions comprise respective displays (see [0083] – “Two integral imaging displays 62 can be provided, each generally being used to display processed light field image data to one of the user's eyes. Specifically, a left integral imaging display 62 can be provided and aligned to display light field image data to the user's left eye. Similarly, a right integral imaging display 62 can be provided and aligned to display light field image data the user's right eye”), and wherein said corresponding LFSE arrays comprise respective microlens arrays (see [0086] – “As shown in FIG. 7, the integral imaging display 62 includes a two-dimensional array of micro-lenses 712 and a two-dimensional array of light sources 714”).

Regarding claim 5, Macnamara et al. discloses said LFSE arrays comprise a microlens array (see [0086] – “As shown in FIG. 7, the integral imaging display 62 includes a two-dimensional array of micro-lenses 712 and a two-dimensional array of light sources 714”).

Regarding claim 6, Macnamara et al. discloses said common virtual position is a variable three-dimensional (3D) position that varies during execution of the vision-based test to dynamically adjust a perceived depth location of said designated visual digital test content and thereby invoke a variable binocular eye vergence response thereto (see [0063] – “Regarding the projection of light 38 into the eyes 20 of the user, in some embodiments, the cameras 24 may be utilized to measure where the user's eyes 20 are looking (e.g., where the lines of sight of the two eyes intersect), which information may be used to determine the state of focus or accommodation of the eyes 20. A 3-dimensional surface of all points focused by the eyes is called the “horopter.” The focal distance may take on a finite number of depths, or may be infinitely varying. Light projected physically or virtually from the vergence distance appears to be focused to the subject eye 20, while light in front of or behind the vergence distance is blurred”, [0067] – “Unlike prior 3D display approaches that force the user to focus where the images are being projected, in some embodiments, the user-worn health system is configured to automatically vary the focus of projected virtual content to allow for a more comfortable viewing of one or more images presented to the user. For example, if the user's eyes have a current focus of 1 m, the image may be projected to coincide with the user's focus. Or, if the user shifts focus to 3 m, the image is projected to coincide with the new focus”, and [0294] – “In some embodiments, the wearable augmented or virtual reality device 1500 can be configured to use the display platform 1502 to project images of varying size to the wearer or images from varying depth planes to the wearer. In some implementations, the image can include letters or shapes of varying sizes and/or projected from varying depth planes. In various implementations, the size and/or depth planes of the letters and/or shapes projected to the wearer can be varied during the eye exam”).

Regarding claim 7, Macnamara et al. discloses the vision-based test comprises a vergence test (see [0069] – “In both the multi-plane focus systems and variable plane focus systems, the patient-worn health system may employ eye tracking to determine the vergence of the user's eyes, to determine the user's current focus, and to project the virtual image at the determined focus” and [0312] – “In certain implementations, the light field processor system can also be configured to determine changes in the wearer's eyes (e.g., accommodation, vergence, pupil size, etc.) when the diopter value is changed. This objective test data can be combined with the subjective response of the wearer to determine if the change in diopter value results in a change in visual quality for the wearer”).

Regarding claim 8, Macnamara et al. discloses said common virtual position is a variable two-dimensional (2D) location on a plane parallel to said display portions that varies during execution of the test to dynamically adjust a common perceived lateral location of said designated visual digital test content (see [0068] – “To achieve this, various embodiments of the patient-worn health system are configured to project virtual images at varying focal distances, through one or more variable focus elements (VFEs). In one or more embodiments, 3D perception may be achieved through a multi-plane focus system that projects images at fixed focal planes away from the user”, [0274] – “A system (e.g., the system 600) may administer visual field testing by determining the ability of a subject to detect an image at various locations within the visual field. The system may project light into the eye of a wearer using, for example, the integral imaging display 62 to form an image in the eye”, and [0275] – “The test may be repeated in various quadrants or locations of the periphery of the wearer's visual field, such as at the left, right, top, and/or bottom of the visual field”).

Regarding claim 10, Macnamara et al. discloses said designated visual digital test content comprises at least one of an optotype, a symbol, an image, a spot or a flash (see [0294] – “The wearable light field processor system 1500 can include a data store that includes one or more stored images suitable for conducting an eye exam or for determining an optical prescription for a wearer. The stored image may be letters, numbers, symbols etc., such as used in eye charts” and [0301] – “The image can include elements configured to aid in determining visual acuity of the wearer, wherein the visual acuity elements comprise, for example and without limitation, icons, symbols, letters, shapes, or the like. The visual acuity elements of the image can have a variety of sizes within the image and/or the size of the visual acuity elements can be varied by the light field processor system”).

Regarding claim 11, Macnamara et al. discloses said digital data processor is operable to adjust rendering of said designated visual digital test content via said corresponding LFSE arrays so to accommodate for a visual aberration in at least one of the left or right eye (see [0082] – “In some embodiments, the numerical light field image data can be processed so as to at least partially correct for a user's myopia, hyperopia, astigmatism, presbyopia, strabismus, amblyopia, macular degeneration, higher-order refractive errors, chromatic aberration, or micro defects. Other types of corrections are also possible” and [0462] – “In some embodiments, in the re-rendering step, the processor (e.g., the light field processor 70) may be configured to selectively modify properties of the image that will be displayed to the wearer. For example, the processor may be configured to selectively alter portions of the image based on a distribution of healthy and unhealthy cells in a retina of the wearer so that those portions are projected to healthy retinal cells, while portions of the image projected to unhealthy retinal cells may be reduced, minimized, magnified, brightened, or otherwise altered in magnification, intensity, hue, saturation, spatial frequency, or other quality. Similarly, any desired portion of the image may be modified in magnification, intensity, hue, saturation, spatial frequency, or any other quality as required to mitigate and/or compensate for any known ophthalmic condition of the wearer. The wavefront of the image may also be modified and/or reshaped so as to mitigate focus-related conditions in some embodiments”).

Regarding claim 12, Macnamara et al. discloses said visual aberration comprises distinct respective visual aberrations for the left and right eye (see [0082] – “In some embodiments, the numerical light field image data can be processed so as to at least partially correct for a user's myopia, hyperopia, astigmatism, presbyopia, strabismus, amblyopia, macular degeneration, higher-order refractive errors, chromatic aberration, or micro defects. Other types of corrections are also possible” and [0313] – “It should be appreciated that, in some implementations, the same process may be repeated for both eyes (e.g., both eyes can be treated together applying the same correction to each eye or individually applying different correction to the left and right eyes)”).

Regarding claim 13, Macnamara et al. discloses a pupil or eye tracking interface for tracking a motion of the left and right eye during execution of the vision-based test (see [0075] – “The eye tracking module may be configured to determine the vergence of the user's eyes in order to determine what the appropriate normal accommodation would be (through the direct relationship between vergence and accommodation) for the projection of one or more virtual images, and may also be configured to track one or more eye-related parameters (e.g., position of the eye, eye movements, eye patterns, etc.)” and [0180] – “Likewise, in various embodiments, the light field processor system may be configured to determine a focal depth at which the eyes are focused or accommodated. In some embodiments, eye-tracking system may be used to triangulate the user's convergence point and adjust the focus of the images to be presented to the user accordingly. For example, the eye-tracking system may determine a direction along which each eye is viewing (e.g., a line extending from each eye) and determine a convergence angle where the directions intersect. The convergence point may be determined from the determined angle of convergence. In some embodiments, the eye-tracking system may be included as part of the biofeedback system. As described above, in various embodiments, the light field processor system may utilize cameras 24 paired with light sources 26 (e.g., an infrared light source and infrared camera) to track the position of each eye, which can be operatively coupled to the local processing module 70. The local processing module 70 may include software that, when executed, may be configured to determine the convergence point of the eyes. From this determination, the light field processor system may also execute logic device to determine a focus location or depth based on the orientation or direction of the user's gaze”).

Regarding claim 14, Macnamara et al. discloses said digital data processor is operable on said pixel data for each of the left and right display portions, respectively, to digitally: project a given ray trace between each given pixel and a given pupil location given a direction of a light field emanated by said given pixel based on a given LFSE intersected thereby, to intersect said designated visual digital test content at said common virtual position or at its respective corresponding retinal image projections thereof (see [0445] – “In one or more embodiments, the light projecting source comprises an integral imaging display with LED light sources configured to project images into different portions of the user's eyes. The system may comprise other types of displays that can be configured to selectively project light onto different portions of the retina. This technology may be leveraged to selectively project pixels of an image to the healthy retinal cells, and reduce, minimize, or alter the nature of light projected to the damaged areas. For example, pixels projected to the anomaly may be magnified or made brighter” and [0446] – “The light field processor system may modify the wearer's view of light from the world. The system may detect the light entering the device in real time or near real time, and may modify portions of the light or project additional light to correct for the wearer's macular deficiency. For example, the system may use outward-facing cameras, such as the integral imaging camera 16, to image the world. The system may then project an image of the world to the wearer. The projected image data may be altered such that pixels may be selectively projected to healthy retinal cells, while pixels projected to anomalies may be reduced, minimized, magnified, brightened, or otherwise altered in magnification, intensity, hue, saturation, spatial frequency, or other quality”); and for each said given pixel, associate a given adjusted image pixel value designated as a function of said intersection (see [0445] – “In one or more embodiments, the light projecting source comprises an integral imaging display with LED light sources configured to project images into different portions of the user's eyes. The system may comprise other types of displays that can be configured to selectively project light onto different portions of the retina. This technology may be leveraged to selectively project pixels of an image to the healthy retinal cells, and reduce, minimize, or alter the nature of light projected to the damaged areas. For example, pixels projected to the anomaly may be magnified or made brighter” and [0446] – “The light field processor system may modify the wearer's view of light from the world. The system may detect the light entering the device in real time or near real time, and may modify portions of the light or project additional light to correct for the wearer's macular deficiency. For example, the system may use outward-facing cameras, such as the integral imaging camera 16, to image the world. The system may then project an image of the world to the wearer. The projected image data may be altered such that pixels may be selectively projected to healthy retinal cells, while pixels projected to anomalies may be reduced, minimized, magnified, brightened, or otherwise altered in magnification, intensity, hue, saturation, spatial frequency, or other quality”).

Regarding claim 15, Macnamara et al. discloses a selectable or tunable lens to extend a dynamic range of said perceived depth location (see [0068] – “To achieve this, various embodiments of the patient-worn health system are configured to project virtual images at varying focal distances, through one or more variable focus elements (VFEs). In one or more embodiments, 3D perception may be achieved through a multi-plane focus system that projects images at fixed focal planes away from the user” and [0174] – “In various embodiments, the light field processor system 600 can computationally function as a variable focus lens. As described above, for example, in correcting for myopia, hyperopia, or astigmatism, the light field processor system 600 may apply computational transforms on the captured light field image data to add spherical wavefront curvature which can be dynamically altered by varying the computational transforms which are applied”).

Regarding claim 16, Macnamara et al. discloses respective selectable or tunable lenses tunable to dynamically optically force the left and right eye to accommodate such that said designated visual digital test content is simultaneously perceived by the left and right eye, respectively, to be at said common virtual position relative (see [0068] – “To achieve this, various embodiments of the patient-worn health system are configured to project virtual images at varying focal distances, through one or more variable focus elements (VFEs). In one or more embodiments, 3D perception may be achieved through a multi-plane focus system that projects images at fixed focal planes away from the user”, [0174] – “In various embodiments, the light field processor system 600 can computationally function as a variable focus lens. As described above, for example, in correcting for myopia, hyperopia, or astigmatism, the light field processor system 600 may apply computational transforms on the captured light field image data to add spherical wavefront curvature which can be dynamically altered by varying the computational transforms which are applied”, and [0356] – “The apparent distance of the image or object can be varied so as to induce accommodation”).

Regarding claim 17, Macnamara et al. discloses said digital data processor is further operable on pixel data for said designated visual digital test content to further adjust perception thereof in dynamically optically forcing the left and right eye to accommodate such that said designated visual digital test content is simultaneously perceived by the left and right eye, respectively, to be at said common virtual position (see [0356] – “The apparent distance of the image or object can be varied so as to induce accommodation” and [0452] – “If it is determined that the user has one or more anomalies, the light field processor system may be configured to project a modified image to the user's eye such that the majority of the image is viewed through healthy peripheral retinal cells, and any pixels projected to the anomalies are adjusted. It should be appreciated that the image to be projected may need to be modified through predetermined algorithms such that the user views the image through the healthy cells, but does not notice a significant change in the image itself”).

Regarding claim 18, Macnamara et al. discloses said digital data processor is further operable on pixel data for said designated visual digital test content to accommodate for a reduced user visual acuity such that said designated visual digital test content is simultaneously perceived by the left and right eye, respectively, to be at said common virtual position relative to the left and right eye without an intervening corrective lens adapted for said reduced visual acuity (see [0226] – “For example, a computational compensating prism correction can be applied as an optical correction for the wearer (e.g., to compensate for convergence deficiencies in one or both eyes of the wearer). This correction can be applied to collected light field image data to account for the deficiencies of the wearer so that the wearer can achieve or approximate binocular single vision even where the wearer suffers from strabismus and/or amblyopia”, [0309] – “In some embodiments, the light field processor system can compare the measured accommodation, vergence, and/or pupil size to an expected accommodation, vergence, and/or pupil size. If one or more of the measured characteristics are within a targeted range of the one or more expected characteristics, then the light field processor system can determine that the wearer is comfortably or correctly seeing the image (e.g., the wearer is seeing the image with expected, adequate, or normal visual acuity). If one or more of the measured characteristics are outside of the targeted range of the one or more expected characteristics, then the light field processor system can determine that the wearer is not comfortably or correctly seeing the image (e.g., the wearer is seeing the image with impaired visual acuity)”, and [0480] – “The higher resolution image information can then be displayed to the user. By digitally increasing the resolution of the image information, the wearer of the system can experience a perceived increase in visual acuity”).

Regarding claim 19, Macnamara et al. discloses said reduced user visual acuity comprises distinct respective reduced visual acuities for each of the right and left eye (see [0226] – “For example, a computational compensating prism correction can be applied as an optical correction for the wearer (e.g., to compensate for convergence deficiencies in one or both eyes of the wearer). This correction can be applied to collected light field image data to account for the deficiencies of the wearer so that the wearer can achieve or approximate binocular single vision even where the wearer suffers from strabismus and/or amblyopia”, [0309] – “In some embodiments, the light field processor system can compare the measured accommodation, vergence, and/or pupil size to an expected accommodation, vergence, and/or pupil size. If one or more of the measured characteristics are within a targeted range of the one or more expected characteristics, then the light field processor system can determine that the wearer is comfortably or correctly seeing the image (e.g., the wearer is seeing the image with expected, adequate, or normal visual acuity). If one or more of the measured characteristics are outside of the targeted range of the one or more expected characteristics, then the light field processor system can determine that the wearer is not comfortably or correctly seeing the image (e.g., the wearer is seeing the image with impaired visual acuity)”, and [0480] – “The higher resolution image information can then be displayed to the user. By digitally increasing the resolution of the image information, the wearer of the system can experience a perceived increase in visual acuity”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Macnamara et al., further in view of Travers et al. (US Publication No. 2017/0296421 A1).

Regarding claim 4, it is noted Macnamara et al. does not specifically teach said perception of said respective left and right light fields is at least partially constrained to the left and right eye via a physical barrier. However, Travers et al. teaches said perception of said respective left and right light fields is at least partially constrained to the left and right eye via a physical barrier (160) (see [0045] – “Since in various embodiments of the invention the apparatus 100 is utilized to display visual content dichoptically to the user, the apparatus may also prevent the left eye of the user from viewing the display screen 115 and the user's right eye from viewing the display screen 110. For example, the apparatus 100 may include a barrier 160 between the lenses 120, 125 that separates the fields of view of the user's eyes”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Macnamara et al. to include said perception of said respective left and right light fields is at least partially constrained to the left and right eye via a physical barrier, as disclosed in Travers et al., so as to separate the fields of view of the user’s eyes (see Travers et al.: [0045]).

Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Macnamara et al., further in view of Samec et al. (US Publication No. 2017/0365101 A1) (cited by Applicant).

Regarding claim 9, it is noted Macnamara et al. does not specifically teach the vision-based test comprises at least one of a saccades test or a smooth pursuit test. However, Samec et al. teaches the vision-based test comprises at least one of a saccades test (see [0706] – “The display system may be configured to detect impairments in smooth saccadic movement in some embodiments. To test the smoothness of the user's saccadic movement, the display system may be configured to present a stimulus to the user in FIG. 11 at block 1710”) or a smooth pursuit test (see [0716] – “The display system may be configured to detect impairments in the smooth pursuit of the user's gaze along the vertical and/or horizontal directions. To test the smooth pursuit of the user's gaze, the display system may be configured to present a stimulus to the user as shown at block 1710 of FIG. 11”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Macnamara et al. to include the vision-based test comprises at least one of a saccades test or a smooth pursuit test, as disclosed in Samec et al., so as to help determine if a user has a sign of dementia, Parkinson’s disease, traumatic brain injury, cortical blindness, etc. (see Samec et al.: [0525]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN B HENSON whose telephone number is (571)270-5340. The examiner can normally be reached M-F 7 AM ET - 5 PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert (Tse) Chen can be reached at (571) 272-3672. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DEVIN B HENSON/
Primary Examiner, Art Unit 3791
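Two technical ideas recur throughout the examiner's citations and are worth seeing concretely. First, the convergence-point triangulation quoted from Macnamara [0180] (cited for claims 1, 2, and 13) reduces to simple ray geometry: each tracked eye defines a gaze ray, and the vergence point is where the two rays (approximately) intersect. The sketch below illustrates only that geometry; the least-squares closest-point formulation and the example values (64 mm interpupillary distance, 1 m target) are our illustrative assumptions, not taken from the reference.

```python
import numpy as np

def convergence_point(p_left, d_left, p_right, d_right):
    """Midpoint of closest approach between two gaze rays p + t*d."""
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    # Normal equations for t1, t2 minimizing |(p_l + t1*d_l) - (p_r + t2*d_r)|
    a = np.array([[d_left @ d_left, -(d_left @ d_right)],
                  [d_left @ d_right, -(d_right @ d_right)]])
    b = np.array([(p_right - p_left) @ d_left,
                  (p_right - p_left) @ d_right])
    t1, t2 = np.linalg.solve(a, b)
    # Midpoint between the two closest points on the rays
    return 0.5 * ((p_left + t1 * d_left) + (p_right + t2 * d_right))

# Toy example: eyes 64 mm apart, both verging on a target 1 m straight ahead.
ipd = 0.064
target = np.array([0.0, 0.0, 1.0])
p_l, p_r = np.array([-ipd / 2, 0.0, 0.0]), np.array([ipd / 2, 0.0, 0.0])
print(convergence_point(p_l, target - p_l, p_r, target - p_r))  # ~[0, 0, 1]
```

For perfectly intersecting rays the midpoint is exact; for real, noisy gaze data, the midpoint of closest approach is a common stand-in for the convergence point.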
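Second, the per-pixel ray trace recited in claim 14 can be illustrated in miniature: each display pixel's emission direction is fixed by the light field shaping element (microlens) above it, that ray is extended to the plane of the virtual test content, and the pixel is assigned a value sampled at the intersection. The 1-D toy below is a hedged sketch of that idea only; it omits the claimed pupil-location test, and all function names, parameters, and units are invented for illustration rather than taken from the application.

```python
import numpy as np

def render_light_field_1d(n_pixels, n_lenses, gap, z_virtual, content):
    """Assign each pixel the content value where its lens-steered ray
    meets the virtual content plane (toy 1-D integral-imaging model)."""
    pixels = np.linspace(-1.0, 1.0, n_pixels)   # pixel lateral positions
    lenses = np.linspace(-1.0, 1.0, n_lenses)   # microlens (LFSE) centers
    frame = np.empty(n_pixels)
    for i, px in enumerate(pixels):
        lens = lenses[np.argmin(np.abs(lenses - px))]  # LFSE over this pixel
        slope = (lens - px) / gap                # ray direction from pixel-lens offset
        x_virtual = px + slope * (gap + z_virtual)  # intersection with virtual plane
        frame[i] = content(x_virtual)            # adjusted pixel value from intersection
    return frame

# Toy content: a bright bar centered at x = 0.2 on a plane 500 units away.
bar = lambda x: float(abs(x - 0.2) < 0.1)
frame = render_light_field_1d(n_pixels=64, n_lenses=8, gap=1.0,
                              z_virtual=500.0, content=bar)
```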

Prosecution Timeline

Sep 01, 2023: Application Filed
Oct 02, 2025: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594005: GRASPING-RESPONSE EVALUATION SYSTEM (Granted Apr 07, 2026; 2y 5m to grant)
Patent 12582300: Steerable instrument comprising a detachable part (Granted Mar 24, 2026; 2y 5m to grant)
Patent 12582347: APPARATUS, METHODS, AND SYSTEMS FOR MEASURING CERVICAL DILATION USING STRUCTURED LIGHT (Granted Mar 24, 2026; 2y 5m to grant)
Patent 12569145: System and Method for Determining Body Core Temperature (Granted Mar 10, 2026; 2y 5m to grant)
Patent 12551142: NONINVASIVE DEVICE FOR MONITOR, DETECTION, AND DIAGNOSIS OF DISEASES AND HUMAN PERFORMANCE (Granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 65%
With Interview: 99% (+43.5%)
Median Time to Grant: 3y 11m
PTA Risk: Low

Based on 777 resolved cases by this examiner. Grant probability derived from career allow rate.
