DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/23/25 has been entered.
Notice of Amendment
In response to the amendment filed on 10/23/25, the amendment of claims 29, 33, 36, and 40 and the cancellation of claims 41-42 are acknowledged. The following new and/or reiterated grounds of rejection are set forth:
Claim Objections
Claim 29 is objected to because of the following informalities: "determining" (line 14) should apparently read "determine," and "users" (line 15) should apparently read "user's."
Claim 36 is objected to because of the following informalities: "adjust" (line 8) should apparently read "adjusting," "record" (line 10) should apparently read "recording," and "users" (line 15) should apparently read "user's."
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “one or more sensors configured to detect a user’s responses,” in claim 29, which corresponds to “head tracking, eye tracking, voice recognition, heart rate, skin capacitance, EKG, brain activity sensors such as EEG, hand and body tracking, geolocation, retinal cameras, balance tracking, temperature, and pupil tracking and any other types of sensors,” (see para [0112] of Applicant’s specification as originally filed); and “one or more sensors,” as recited in claim 36, which corresponds to “head tracking, eye tracking, voice recognition, heart rate, skin capacitance, EKG, brain activity sensors such as EEG, hand and body tracking, geolocation, retinal cameras, balance tracking, temperature, and pupil tracking and any other types of sensors,” (see para [0112] of Applicant’s specification as originally filed).
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim(s) 29-33 and 36-40 is/are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
For claim 29, the claim language “determine a threshold estimate based on a value of an observable property of the virtual object being displayed and the head movement tracking data” does not appear to be described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. A claim may lack written description when the specification does not disclose the computer and the algorithm (i.e., the necessary steps and/or flowcharts) that perform the claimed function in sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor invented the claimed subject matter. See MPEP 2161.01(I). Here, the claim recites the function of determining a threshold estimate based on a value of an observable property of the virtual object being displayed and the head movement tracking data, but the specification never discloses the necessary steps and/or flowcharts of how this occurs. It is not enough that a skilled artisan could devise a way to accomplish the function because this is not relevant to the issue of whether the inventor has shown possession of the claimed invention. See MPEP 2161.01(I). Therefore, adequate disclosure is needed.
For claim 29, the claim language “ii) upon reaching a point of perceptual change, record the value of the observable property as the determined threshold estimate for the observable property” appears to be new matter. The examiner could not find this exact claim language in the specification and, although the same terms used in the claims do not need to be used in the written description, the examiner could not find any corollaries either. The examiner respectfully requests Applicant’s assistance in determining where support may be found or have the subject matter deleted from the claim(s).
For claim 31, the claim language “wherein the threshold estimate indicates a value of the observable property at which a change in the user's perception of the virtual object occurs” appears to be new matter. The examiner could not find this exact claim language in the specification and, although the same terms used in the claims do not need to be used in the written description, the examiner could not find any corollaries either. The examiner respectfully requests Applicant’s assistance in determining where support may be found or have the subject matter deleted from the claim(s).
For claim 32, the claim language “wherein the threshold estimate indicates visual capabilities of the user and is used to refine the virtual reality environment to progressively improve vision of the user based on an interaction of the user with the virtual object” appears to be new matter. The examiner could not find this exact claim language in the specification and, although the same terms used in the claims do not need to be used in the written description, the examiner could not find any corollaries either. The examiner respectfully requests Applicant’s assistance in determining where support may be found or have the subject matter deleted from the claim(s).
For claim 33, the claim language “wherein the threshold estimate indicates a property value where a user begins to use a suppressed eye” appears to be new matter. The examiner could not find this exact claim language in the specification and, although the same terms used in the claims do not need to be used in the written description, the examiner could not find any corollaries either. The examiner respectfully requests Applicant’s assistance in determining where support may be found or have the subject matter deleted from the claim(s).
For claim 36, the claim language “determining a threshold estimate based on a value of an observable property of the virtual object being displayed and the head movement tracking data” does not appear to be described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. A claim may lack written description when the specification does not disclose the computer and the algorithm (i.e., the necessary steps and/or flowcharts) that perform the claimed function in sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor invented the claimed subject matter. See MPEP 2161.01(I). Here, the claim recites the function of determining a threshold estimate based on a value of an observable property of the virtual object being displayed and the head movement tracking data, but the specification never discloses the necessary steps and/or flowcharts of how this occurs. It is not enough that a skilled artisan could devise a way to accomplish the function because this is not relevant to the issue of whether the inventor has shown possession of the claimed invention. See MPEP 2161.01(I). Therefore, adequate disclosure is needed.
For claim 36, the claim language “ii) upon reaching a point of perceptual change, recording the value of the observable property as the determined threshold estimate for the observable property” appears to be new matter. The examiner could not find this exact claim language in the specification and, although the same terms used in the claims do not need to be used in the written description, the examiner could not find any corollaries either. The examiner respectfully requests Applicant’s assistance in determining where support may be found or have the subject matter deleted from the claim(s).
For claim 38, the claim language “wherein the threshold estimate indicates a value of the observable property at which a change in the user's perception of the virtual object occurs” appears to be new matter. The examiner could not find this exact claim language in the specification and, although the same terms used in the claims do not need to be used in the written description, the examiner could not find any corollaries either. The examiner respectfully requests Applicant’s assistance in determining where support may be found or have the subject matter deleted from the claim(s).
For claim 39, the claim language “wherein the threshold estimate indicates the user's visual capabilities and is used to refine the virtual reality environment to progressively improve the user's vision based on the user's interaction with the virtual object” appears to be new matter. The examiner could not find this exact claim language in the specification and, although the same terms used in the claims do not need to be used in the written description, the examiner could not find any corollaries either. The examiner respectfully requests Applicant’s assistance in determining where support may be found or have the subject matter deleted from the claim(s).
For claim 40, the claim language “wherein the threshold estimate indicates a property value where a user begins to use a suppressed eye” appears to be new matter. The examiner could not find this exact claim language in the specification and, although the same terms used in the claims do not need to be used in the written description, the examiner could not find any corollaries either. The examiner respectfully requests Applicant’s assistance in determining where support may be found or have the subject matter deleted from the claim(s).
Dependent claim(s) 30-33 and 37-40 fail to cure the deficiencies of independent claim(s) 29 and 36, thus claim(s) 29-33 and 36-40 is/are rejected under 35 U.S.C. 112(a).
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim(s) 29-33 and 36-40 is/are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
For claim 29, the claim language “one or more sensors configured to detect a user’s responses” is ambiguous. This language is interpreted under 35 U.S.C. 112(f) and the corresponding structure is found at para [0112] of Applicant’s specification as originally filed. Para [0112] provides a laundry list of different species of sensors that may be included in the claimed genus. However, some of the species do not have clear structure and therefore the metes and bounds of the genus cannot be ascertained. For example, “any other types of sensors” does not have clear structure as to what species of sensors a skilled artisan would understand these to be. The claim is examined as meaning any type/structure of sensor that is capable of performing the function of detecting a user’s responses.
For claim 36, the claim language “one or more sensors” is ambiguous. This language is interpreted under 35 U.S.C. 112(f) and the corresponding structure is found at para [0112] of Applicant’s specification as originally filed. Para [0112] provides a laundry list of different species of sensors that may be included in the claimed genus. However, some of the species do not have clear structure and therefore the metes and bounds of the genus cannot be ascertained. For example, “any other types of sensors” does not have clear structure as to what species of sensors a skilled artisan would understand these to be. The claim is examined as meaning any type/structure of sensor that is capable of performing the function of detecting a user’s responses.
Dependent claim(s) 30-33 and 37-40 fail to cure the ambiguity of independent claim(s) 29 and 36, thus claim(s) 29-33 and 36-40 is/are rejected under 35 U.S.C. 112(b).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 29-33 and 36-40 is/are rejected under 35 U.S.C. 101 because the claimed invention, considering all claim elements both individually and in combination as a whole, does not amount to significantly more than a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea).
Claim 29 is a claim to a process, machine, manufacture, or composition of matter and therefore meets one of the categorical limitations of 35 U.S.C. 101. However, Claim 29 meets the first prong of the step 2A analysis because it is directed to an abstract idea, as evidenced by the claim language of “display, in a virtual reality environment … a virtual object including an observable property,” “receiving user input data … wherein the user input data comprises head movement tracking data,” “determine a threshold estimate based on a value of an observable property of the virtual object being displayed and the head movement tracking data, wherein determine the threshold estimate further comprises: i) adjust the value of the observable property based on user’s input data to measure perceptual changes in user’s perception of the virtual object, ii) upon reaching a point of perceptual change, record the value of the observable property as the determined threshold estimate for the observable property,” “modify, upon a determination that the threshold estimate is not within a desired confidence interval, the value of the observable property of the virtual object displayed,” “display, in the virtual reality environment … the virtual object according to the modified observable property,” and “conducting iterative performances of a) - e) until the threshold estimate is within the desired confidence interval.” This claim language, under the broadest reasonable interpretation, encompasses subject matter that may be performed mentally or with pen and paper. Specifically, many of these steps may be thought about and then expressed by writing them down or drawing them on paper. The claim language also meets prong 2 of the step 2A analysis because the above-recited claim language does not integrate the abstract idea into a practical application.
That is, there appears to be no tangible improvement in a technology, effect of a particular treatment or prophylaxis, a particular machine or manufacture that is integrated, or transformation/reduction of a particular article to a different state or thing as a result of this claimed subject matter. As a result, step 2A is satisfied and the second step, step 2B, must be considered.
With regard to the second step, the claim does not appear to recite additional elements that amount to significantly more. The additional elements include “a computing device comprising at least one data processor and at least one computer-readable storage medium storing computer-executable instructions,” “a head-mountable display device configured to communicate with the computing device and having a virtual reality display configured to render a virtual reality environment,” and “one or more sensors configured to detect a user’s responses.” However, these elements are not “significantly more” because they are well-known, routine, and/or conventional as evidenced by para [0024] of Applicant’s specification as originally filed, which identifies that the “head-mountable VR device can be any type of a VR device, which can include a low-cost device …. The virtual environment delivered to the user by the VR device…,” para [0056] of Applicant’s specification, which states “a head mountable virtual reality (VR) device that creates a virtual reality environment for the user wearing the VR device such that a display, or screen, is positioned over the user’s eyes. The VR device includes at least one data processor, a visual interface, and memory storing instructions for execution by the at least one data processor,” and para [0028] of U.S. Patent Application Publication No. 2014/0192326 to Kiderman et al. Therefore, these elements do not add significantly more and thus the claim as a whole does not amount to significantly more than a judicial exception.
Additionally, the ordered combination of elements does not add anything significantly more to the claimed subject matter. Specifically, the ordered combination of elements does not have any function that is not already supplied by each element individually. That is, the whole is not greater than the sum of its parts.
Claim 36 is a claim to a process, machine, manufacture, or composition of matter and therefore meets one of the categorical limitations of 35 U.S.C. 101. However, Claim 36 meets the first prong of the step 2A analysis because it is directed to an abstract idea, as evidenced by the claim language of “displaying, in a virtual reality environment … a virtual object including an observable property,” “receiving user input data … wherein the user input data comprises head movement tracking data,” “determining a threshold estimate based on a value of an observable property of the virtual object being displayed and the head movement tracking data, wherein determining the threshold estimate further comprises: i) adjusting the value of the observable property based on user’s input data to measure perceptual changes in user’s perception of the virtual object, ii) upon reaching a point of perceptual change, record the value of the observable property as the determined threshold estimate for the observable property,” “modifying, upon a determination that the threshold estimate is not within a desired confidence interval, the value of the observable property of the virtual object displayed,” “displaying, in the virtual reality environment … the virtual object according to the modified observable property,” and “conducting iterative performances of a) - e) until the threshold estimate is within the desired confidence interval.” This claim language, under the broadest reasonable interpretation, encompasses subject matter that may be performed mentally or with pen and paper. Specifically, many of these steps may be thought about and then expressed by writing them down or drawing them on paper. The claim language also meets prong 2 of the step 2A analysis because the above-recited claim language does not integrate the abstract idea into a practical application.
That is, there appears to be no tangible improvement in a technology, effect of a particular treatment or prophylaxis, a particular machine or manufacture that is integrated, or transformation/reduction of a particular article to a different state or thing as a result of this claimed subject matter. As a result, step 2A is satisfied and the second step, step 2B, must be considered.
With regard to the second step, the claim does not appear to recite additional elements that amount to significantly more. The additional elements include “a head-mountable display device,” and “one or more sensors.” However, these elements are not “significantly more” because they are well-known, routine, and/or conventional as evidenced by para [0024] of Applicant’s specification as originally filed, which identifies that the “head-mountable VR device can be any type of a VR device, which can include a low-cost device …. The virtual environment delivered to the user by the VR device…,” para [0056] of Applicant’s specification, which states “a head mountable virtual reality (VR) device that creates a virtual reality environment for the user wearing the VR device such that a display, or screen, is positioned over the user’s eyes. The VR device includes at least one data processor, a visual interface, and memory storing instructions for execution by the at least one data processor,” and para [0028] of U.S. Patent Application Publication No. 2014/0192326 to Kiderman et al. Therefore, these elements do not add significantly more and thus the claim as a whole does not amount to significantly more than a judicial exception.
Additionally, the ordered combination of elements does not add anything significantly more to the claimed subject matter. Specifically, the ordered combination of elements does not have any function that is not already supplied by each element individually. That is, the whole is not greater than the sum of its parts.
In view of the above, independent claims 29 and 36 fail to recite patent-eligible subject matter under 35 U.S.C. 101. Dependent claim(s) 30-33 and 37-40 fail to cure the deficiencies of independent claims 29 and 36 by merely reciting additional abstract ideas, further limitations on abstract ideas already recited, and/or additional elements that fail to recite significantly more. Thus, claim(s) 29-33 and 36-40 is/are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 29-33 and 36-40 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2013/0308099 to Stack in view of U.S. Patent Application Publication No. 2006/0087618 to Smart et al. (hereinafter “Smart”), U.S. Patent Application Publication No. 2010/0073469 to Fateh, and U.S. Patent Application Publication No. 2014/0347390 to Poulos et al. (hereinafter “Poulos”).
For claim 29, Stack discloses a system for vision assessment and correction (Abstract), comprising:
a computing device (Examiner’s Note: made up of the elements it comprises) comprising at least one data processor (30) (Fig. 1B) (para [0073]);
a head-mountable display device (“headset,” para [0068]) configured to communicate with the computing device (as can be seen in Fig. 1B) and having a virtual reality display (20) (Fig. 1B) (para [0071]) configured to (Examiner’s Note: functional language, i.e., capable of) render a virtual reality environment (para [0046], [0050], and/or [0055]); and
one or more sensors (26) (Fig. 1B) (para [0072]-[0073]) configured to (Examiner’s Note: functional language, i.e., capable of) detect a user’s responses (para [0072]);
wherein the at least one data processor is configured to:
display, in a virtual reality environment and on the head-mountable display device, a virtual object (40) (Fig. 2) (para [0075]) having an observable property (as can be seen in Figs. 3-4 and 6-7) (104) (Fig. 8);
receive user input data generated by the one or more sensors (116) (Fig. 8);
determine a threshold estimate based on a value of an observable property of the virtual object being displayed and the user input data (130, 146, 150, 152 and/or 160) (Fig. 8) (para [0104] and/or [0109]-[0113]).
Stack does not expressly disclose wherein the user input data comprises head movement tracking data.
However, Smart teaches head movement tracking data (para [0082] and [0189]).
It would have been obvious to a skilled artisan to modify Stack wherein the user input data comprises head movement tracking data, in view of the teachings of Smart, for the obvious advantage of being able to account for a user controlling a squint by moving their head (see para [0189] of Smart). It should be noted that this modification does not purport to make the eye tracking data in Stack dependent on head tracking data because Stack recognizes that it is advantageous for its gaze tracking data to be free from head movement so that artifacts are not introduced into the data. Instead, the head tracking data can be collected separately, and in addition to, the gaze tracking data, which allows the gaze tracking data to remain free from artifacts. By doing so, two separate data streams can be analyzed to give insight into the behavior of the user that will allow for a more accurate diagnosis to be rendered.
Stack and Smart do not expressly disclose at least one computer-readable storage medium storing computer-executable instructions; wherein the at least one data processor is configured to execute the computer-executable instructions; wherein determine the threshold estimate further comprises: adjust the value of the observable property based on user’s input data to measure perceptual changes in user’s perception of the virtual object, upon reaching a point of perceptual change, record the value of the observable property as the determined threshold estimate for the observable property; modifying, upon a determination that the threshold estimate is not within a desired confidence interval, the value of the observable property of the virtual object displayed on the head-mountable display device; displaying, in the virtual reality environment and on the head-mountable display device, the virtual object according to the modified observable property; and conducting iterative performances of a) - e) until the threshold estimate is within the desired confidence interval.
However, Fateh teaches at least one computer-readable storage medium storing computer-executable instructions (para [0058]-[0059]); wherein the at least one data processor is configured to execute the computer-executable instructions (para [0058]-[0059]); wherein determine the threshold estimate further comprises: adjust the value of the observable property based on user’s input data to measure perceptual changes in user’s perception of the virtual object (see Fig. 8) (also see para [0093] and [0096]-[0097]), upon reaching a point of perceptual change, record the value of the observable property as the determined threshold estimate for the observable property (see Fig. 8) (also see para [0093] and [0096]-[0097], which teach that re-evaluation may be triggered manually instead of periodically); modifying, upon a determination that the threshold estimate is not within a desired parameter, the value of the observable property of the virtual object displayed on the head-mountable display device (i.e., displayed on 104, see Fig. 1) (808) (Fig. 8); displaying, in the virtual reality environment (i.e., see Fig. 3 for example) and on the head-mountable display device, the virtual object according to the modified observable property (810) (Fig. 8); and conducting iterative performances of a) - e) until the threshold estimate is within the desired parameter (812) (Fig. 8).
Additionally, Poulos teaches that a parameter can be a confidence interval (para [0036]-[0038]).
It would have been obvious to a skilled artisan to modify Stack to include at least one computer-readable storage medium storing computer-executable instructions; wherein the at least one data processor is configured to execute the computer-executable instructions; wherein determine the threshold estimate further comprises: adjust the value of the observable property based on user’s input data to measure perceptual changes in user’s perception of the virtual object, upon reaching a point of perceptual change, record the value of the observable property as the determined threshold estimate for the observable property; modifying, upon a determination that the threshold estimate is not within a desired confidence interval, the value of the observable property of the virtual object displayed on the head-mountable display device; displaying, in the virtual reality environment and on the head-mountable display device, the virtual object according to the modified observable property; and conducting iterative performances of a) - e) until the threshold estimate is within the desired confidence interval, in view of the teachings of Fateh and Poulos, for the obvious advantage of taking into account improvement in eye function over time, especially if the patient is undergoing therapy, so as to evaluate the effectiveness of the therapy. Additionally, a computer-readable storage medium would be obvious because it is a suitable means by which to store software that may be executed by the processor. Moreover, the use of confidence intervals allows for non-binary analysis that provides more granularity in the decision-making process.
For claim 34, Stack, as modified, further discloses wherein determine the threshold estimate further comprises: adjust the value of the observable property based on users input data to measure perceptual changes in user’s perception of the virtual object (see Fig. 8 of Fateh) (also see para [0093] and [0096]-[0097]); and upon reaching a point of perceptual change, record the value of the observable property as the determined threshold estimate for the observable property (see Fig. 8 of Fateh) (also see para [0093] and [0096]-[0097], which teach that re-evaluation may be triggered manually instead of periodically).
For claim 41, Stack, as modified, further discloses wherein determining the threshold estimate further comprises: adjusting the value of the observable property based on users input data to measure perceptual changes in user’s perception of the virtual object (see Fig. 8 of Fateh) (also see para [0093] and [0096]-[0097]); and upon reaching a point of perceptual change, recording the value of the observable property as the determined threshold estimate for the observable property (see Fig. 8 of Fateh) (also see para [0093] and [0096]-[0097], which teach that re-evaluation may be triggered manually instead of periodically).
For claim 30, Stack discloses wherein the one or more sensors comprise a motion recognition sensor (para [0072]).
For claim 31, Stack, as modified, further discloses wherein the threshold estimate indicates a value of the observable property at which a change in the user's perception of the virtual object occurs (para [0093] and [0096] of Fateh).
For claim 32, Stack, as modified, further discloses wherein the threshold estimate indicates visual capabilities of the user and is used to refine the virtual reality environment to progressively improve vision of the user based on an interaction of the user with the virtual object (see Fig. 8 of Fateh) (also see para [0093] and [0096]-[0097]).
For claim 33, Stack, as modified, further discloses wherein the threshold estimate indicates a property value where a user begins to use a suppressed eye (Examiner’s Note: functional language/intended use, i.e., capable of) (see Fig. 8 of Fateh) (also see para [0093] and [0096]-[0097]).
For claim 36, Stack discloses a method for vision assessment and correction (Abstract), comprising:
displaying, in a virtual reality environment and on a head-mountable display device (“headset,” para [0068]), a virtual object (40) (Fig. 2) (para [0075]) having an observable property (as can be seen in Figs. 3-4 and 6-7) (104) (Fig. 8);
receiving user input data (para [0072]) generated by one or more sensors (26) (Fig. 1B) (para [0072]-[0073]);
determining a threshold estimate based on a value of an observable property of the virtual object being displayed and the user input data (130, 146, 150, 152 and/or 160) (Fig. 8) (para [0104] and/or [0109]-[0113]).
Stack does not expressly disclose wherein the user input data comprises head movement tracking data.
However, Smart teaches head movement tracking data (para [0082] and [0189]).
It would have been obvious to a skilled artisan to modify Stack wherein the user input data comprises head movement tracking data, in view of the teachings of Smart, for the obvious advantage of being able to account for a user controlling a squint by moving their head (see para [0189] of Smart). It should be noted that this modification does not require the eye tracking data in Stack to be dependent on head tracking data, because Stack recognizes that it is advantageous for its gaze tracking data to be free from head movement so that artifacts are not introduced into the data. Instead, the head tracking data can be collected separately from, and in addition to, the gaze tracking data, which allows the gaze tracking data to remain free from artifacts. By doing so, two separate data streams can be analyzed to give insight into the behavior of the user, allowing a more accurate diagnosis to be rendered.
Stack and Smart do not expressly disclose wherein determining the threshold estimate further comprises: i) adjusting the value of the observable property based on user’s input data to measure perceptual changes in user’s perception of the virtual object, ii) upon reaching a point of perceptual change, recording the value of the observable property as the determined threshold estimate for the observable property; modifying, upon a determination that the threshold estimate is not within a desired confidence interval, the value of the observable property of the virtual object displayed on the head-mountable display device; displaying, in the virtual reality environment and on the head-mountable display device, the virtual object according to the modified observable property; and conducting iterative performances of a) - e) until the threshold estimate is within the desired confidence interval.
However, Fateh teaches wherein determining the threshold estimate further comprises: adjusting the value of the observable property based on user’s input data to measure perceptual changes in user’s perception of the virtual object (see Fig. 8) (also see para [0093] and [0096]-[0097]), upon reaching a point of perceptual change, recording the value of the observable property as the determined threshold estimate for the observable property (see Fig. 8) (also see para [0093] and [0096]-[0097], which teach that re-evaluation may be triggered manually instead of periodically); modifying, upon a determination that the threshold estimate is not within a desired parameter, the value of the observable property of the virtual object displayed on the head-mountable display device (i.e., displayed on 104, see Fig. 1) (808) (Fig. 8); displaying, in the virtual reality environment (i.e., see Fig. 3 for example) and on the head-mountable display device, the virtual object according to the modified observable property (810) (Fig. 8); and conducting iterative performances of a) - e) until the threshold estimate is within the desired parameter (812) (Fig. 8).
Additionally, Poulos teaches that a parameter can be a confidence interval (para [0036]-[0038]).
It would have been obvious to a skilled artisan to modify Stack wherein determining the threshold estimate further comprises: i) adjusting the value of the observable property based on user’s input data to measure perceptual changes in user’s perception of the virtual object, ii) upon reaching a point of perceptual change, recording the value of the observable property as the determined threshold estimate for the observable property; modifying, upon a determination that the threshold estimate is not within a desired confidence interval, the value of the observable property of the virtual object displayed on the head-mountable display device; displaying, in the virtual reality environment and on the head-mountable display device, the virtual object according to the modified observable property; and conducting iterative performances of a) - e) until the threshold estimate is within the desired confidence interval, in view of the teachings of Fateh and Poulos, for the obvious advantage of taking into account improvement in eye function over time, especially if the patient is undergoing therapy, so as to evaluate the effectiveness of the therapy. Additionally, a computer-readable storage medium would be obvious because it is a suitable means by which to store software that may be executed by the processor. Moreover, the use of confidence intervals allows for non-binary analysis that provides more granularity in the decision-making process.
For claim 37, Stack discloses wherein the one or more sensors comprise a motion recognition sensor (para [0072]).
For claim 38, Stack, as modified, further discloses wherein the threshold estimate indicates a value of the observable property at which a change in the user's perception of the virtual object occurs (para [0093] and [0096] of Fateh).
For claim 39, Stack, as modified, further discloses wherein the threshold estimate indicates visual capabilities of the user and is used to refine the virtual reality environment to progressively improve vision of the user based on an interaction of the user with the virtual object (see Fig. 8 of Fateh) (also see para [0093] and [0096]-[0097]).
For claim 40, Stack, as modified, further discloses wherein the threshold estimate indicates a property value where a user begins to use a suppressed eye (Examiner’s Note: functional language/intended use, i.e., capable of) (see Fig. 8 of Fateh) (also see para [0093] and [0096]-[0097]).
Response to Arguments
Applicant’s arguments filed 10/23/25 have been fully considered. They will be treated in the order they were presented.
With respect to the 112(a) rejections, the response does not address the rationale of the rejections against claims 29 and 36. The rationale behind the written description rejection is that there is no algorithm to “determine a threshold estimate….” The response merely cites a passage from the written description that parrots the claim language. However, no further detail is given disclosing the computer and the algorithm (i.e., the necessary steps and/or flowcharts) that perform the claimed function in sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor invented the claimed subject matter. Applicant is reminded that it is not enough that a skilled artisan could devise a way to accomplish the function, because this is not relevant to the issue of whether the inventor has shown possession of the claimed invention. See MPEP 2161.01(I). Therefore, adequate disclosure is needed. The remaining rejections are not addressed in the response and are therefore maintained.
With respect to the 112(b) rejections, the rejections are maintained as there were no arguments traversing the rejections in the response.
With respect to the 101 rejections, Applicant’s arguments will be treated in the order they were presented. With respect to the first argument, an improvement in mathematics is not an improvement in technology. Here, no “technology” is being improved; rather, the estimation of the threshold is being improved. But improving an estimation is an improvement of an intangible mathematical process because an estimation is a mathematical construct. With respect to the second argument, the calibration loop is capable of being performed in the human mind because it does not result in a tangible product. Instead, what is produced from this calibration loop is the adjustment of a value and the recording of a value, both mental concepts and both things that the human mind can perform.
With respect to the 103 rejection(s), Applicant argues that Fateh does not teach adjusting the observable property, but then gives examples of the observable property that include brightness and contrast. Para [0093] of Fateh explicitly uses the terms “brightness” and “contrast.” Applicant then argues that the visual data in Fateh is not “user’s input data” and gives the example of a button press or subjective reporting. However, the claim language is not so limited. The claim term “user’s input data” is broad enough under the broadest reasonable interpretation to include visual data (such as the data in Fateh) because that data is (1) from the user and (2) input into the flowchart shown in Fig. 8 to adjust the image parameters. The response also argues that the claim language is directed towards identifying perceptual change in the user’s perceived experience. Fateh teaches that the user may have trouble detecting edges (i.e., their perceived experience; the reference does not say that the edges are absent from the image, but rather that the user has difficulty detecting them) and that this changes as the image is modified. Finally, the response argues that Fateh does not record the value as claimed. But it is important to read the entire claim language, which recites “record the value of the observable property as the determined threshold estimate” (see claim 29). That is, the value of the observable property is being correlated to the threshold estimate. Fateh discloses at para [0093] that “[t]he value of the enhancement parameter is typically proportional to the degree of deficiency. Similarly, if a user eye is un-affected and functional, a reduction in image parameter is generated, in proportion to the visual strength.”
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL LEE CERIONI whose telephone number is (313) 446-4818. The examiner can normally be reached M - F 8:00 AM - 5:00 PM PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Robertson, can be reached at (571) 272-5001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL L CERIONI/Primary Examiner, Art Unit 3791