Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
2. Applicant’s arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
With regard to claim 16, Applicant argues that it depends from amended claim 1 and should be allowable for the reasons for which amended claim 1 is allowable over the cited prior art. The examiner disagrees. Claim 16 is written as an independent claim, and Applicant did not provide any separate arguments regarding claim 16; therefore, the rejection of claim 16 is maintained.
Claim Objections
3. Claims 7, 8, 12, and 16 are objected to because of the following informalities:
Claim 7, lines 4-5, line 10, and lines 14-15: "performing an acuity test" should be --performing the acuity test--.
Claim 8, line 2: "performing the vision test" should be --performing a vision test--.
Claim 8, line 3: "a mobile device" should be --the mobile device--.
Claim 12, lines 3-4: "determining performing an acuity test" should be --performing an acuity test--.
Claim 16, line 5: "their voice" should be --the user's voice--.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claim(s) 1, 2, 8, and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Limon (US20210401282A1) in view of MADDALENA (WO-0200105-A1).
With regard to claim 1, Limon teaches a method for assessing vision of a user, the method comprising: guiding the user to a suitable distance ([0007], display at least one image at a first distance) from a display (110, Fig. 1) using one or more outputs from a mobile device (106, Fig. 1; [0046], the computing device is a mobile computing device such as a smartphone device), the mobile device operatively connected to the display (110 and 106 are connected through a network, Fig. 1); and, separately, for each eye of the user (see Fig. 4; [0105], at the end of the first eye flow, start the second eye flow), performing an acuity test by: presenting, by the display ([0047], the display device can display images to the test subject), at least one diagram directly to the eye of the user via the display (Fig. 16; see [0028], Fig. 16 shows an example group of steps for a visual acuity test); and enabling the user to select, via interaction with the mobile device (e.g., 106, Fig. 1; [0124], an input device connected to 106 converts user inputs into a computer-readable signal), at least one input per diagram (see the user's actions in Fig. 16), wherein the at least one input per diagram (see the diagrams the user sees in Fig. 16) corresponds to an acuity measurement (see Fig. 16); recording an acuity test response (user's actions, Fig. 16; [0035], the results of the test can be provided to a third party, therefore they are recorded); separately (see Fig. 4; [0105], at the end of the first eye flow, start the second eye flow), for each eye of the user, performing an axis test (axis refinement test, Fig. 14) by: presenting, by the display (110, Fig. 1), at least one diagram (the diagram the user sees, Fig. 14) directly to the eye of the user via the display (e.g., 110, Fig. 1); presenting, by the display, at least one question (what the user hears, Fig. 14) related to the diagram; and enabling the user to select, via interaction with the mobile device, at least one response per question (the user's action corresponding to each diagram the user sees) (note that [0085] states the cylinder axis is measured if astigmatism exists and an SNR image is shown, and [0087] states the user can provide feedback by grading the perceived sharpness and darkness of the lines in response to the question; see the question in Fig. 9A, what the user hears during the SNR axis measurement, and the user's action as the answer); recording an axis test response (see Fig. 14, Fig. 9A); and transmitting vision assessment results comprising the acuity test response and axis test response to an optical professional or organization of optical professionals for determining if analysis or conversion can be made from the vision assessment results to a clinically usable format ([0035], the results of the test process can, in some cases, be provided to a third party (e.g., an eyecare professional) to interpret the results and take appropriate clinical action; as discussed above, the test includes the acuity test of Fig. 16 and the axis test of Figs. 14 and 9A).
Limon does not explicitly teach a display of a computer, or transmitting vision assessment raw results comprising the acuity test response and axis test response to an optical professional or organization of optical professionals.
However, Maddalena teaches a display of a computer (see Fig. 19, 1914 of 1901), and transmitting vision assessment raw results comprising the acuity test response and axis test response to an optical professional or organization of optical professionals (page 8, lines 19-28: raw data are then stored in the clinical database 226; test data from the clinical database 226 are passed to a diagnostic module 228, in which the data may either be analyzed automatically... or analyzed by a legally registered optometrist; Fig. 9 discusses a visual acuity test, and Limon discusses recording the acuity test and axis test, as noted above).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Limon with the display of a computer and the transmitting of vision assessment raw results comprising the acuity test response and axis test response to an optical professional or organization of optical professionals, as taught by Maddalena. The motivation for using a conventional general-purpose computer system is to use a simple, existing computer, given its reliability, cost-effectiveness, widespread availability, and efficiency for everyday tasks. The motivation for converting raw data to a clinical-use format is to improve patient care through more accurate and complete records, more accurate diagnoses, and enhanced clinical decision making.
With regard to claim 2, the combination of Limon and MADDALENA teaches all the limitations of claim 1. Limon further teaches that using the one or more outputs from the mobile device comprises using audio prompts (see Fig. 16, "User hears," which constitutes audio prompts).
With regard to claim 8, the combination of Limon and MADDALENA teaches all the limitations of claim 1. Limon further teaches providing the user with instructions for performing the vision test via output from a mobile device (see claim 11, the computer processor (106) controls the test; [0046], 106 can be a smartphone; [0070], the user downloads an app to the smartphone), the mobile device (e.g., 106, Fig. 1) operatively connected to the computer (e.g., 110, Fig. 1).
With regard to claim 15, the combination of Limon and MADDALENA teaches all the limitations of claim 1. Limon further teaches that the optical professional or organization of optical professionals ([0035], the results of the test process can, in some cases, be provided to a third party (e.g., an eyecare professional)) is different from an operator of the computer or a provider of the method ([0035], the test process can be self-administered, meaning the test subject can be the user operating and implementing the test process; in some embodiments, the test subject can therefore be alone or receive no assistance from other people while completing the test process). Because the examination is self-conducted, the operator/provider of the method is the user themself and is therefore different from the third party (e.g., the eyecare professional).
5. Claim(s) 3 and 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Limon (US20210401282A1) and MADDALENA (WO-0200105-A1) in view of Carrafa '987 (US 20160353987 A1).
Regarding claim 3, the combination of Limon and Maddalena teaches the method of claim 1, but does not teach wherein guiding the user to a specified distance from the display comprises causing the mobile device to execute an application, which uses a camera of the mobile device to assist the user in determining whether the user is at the suitable distance from the display.
Carrafa ‘987 (US 20160353987 A1) teaches wherein guiding the user ([0058]: Step 250 of the process 200 includes guiding a user) to a specified distance ([0058]: a specific distance from the monitor 170) from the display (Fig. 7A: monitor 170) comprises causing the mobile device (Fig. 1: mobile device 120) to execute an application ([0033]: custom software) ([0033]: distance from a target is determined by using a camera capable of running custom software… such as may be provided by a smartphone or other mobile or portable device, such as a tablet or laptop computer), which uses a camera (Fig. 1: camera 145) of the mobile device (Fig. 1: mobile device 120) to assist the user in determining whether the user is at the suitable distance from the display ([0057]: Step 240 of the process 200 includes tracking the distance from the device 120 to the monitor 170 of computer 130 as the device 120 is moved away from or toward the monitor 170 with the camera 145 of device 120 trained on the monitor 170).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method of claim 1 with the application and camera taught by Carrafa '987 (US 20160353987 A1) in order to track the user's distance in real time ([0057]: As the distance changes, the portion of the viewfinder taken up by the monitor 170 will also change. This data may be used along with the initial distance determination to track the current distance of the camera 145 from the monitor 170 on a near real time basis).
Regarding claim 4, the combination of Limon, Maddalena, and Carrafa '987 (US 20160353987 A1) teaches the method of claim 3.
Limon does not teach wherein the application is a web-based application that does not require the user to download a separate application.
Carrafa ‘987 (US 20160353987 A1) teaches wherein the application ([0047]: an application or program running on the computer 130) is a web-based application ([0047]: provided through a web-page) that does not require the user to download a separate application ([0038]: the mobile device can be linked to a web page or application running on the computer such that the mobile device can be used to control the application on the computer. This can be helpful for guiding the user through the calibration process and also for guiding the user through an eye exam), i.e., the mobile device controls the web-based application for guiding the user through an eye exam.
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method of claim 3 with the web-based application taught by Carrafa '987 (US 20160353987 A1) in order to allow the application to run on most consumer devices ([0033]: the methods provided do not require specific information about the camera and can be run on most consumer mobile phones or any portable computing device that includes a camera). Note that Maddalena also teaches that the application is a web-based application (page 6, lines 10-15: the computer is typically connected to a network such as the World Wide Web (WWW) to allow access to a further computer at which the diagnostic evaluation is undertaken) that does not require the user to download a separate application.
6. Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Limon (US20210401282A1), MADDALENA (WO-0200105-A1), and Carrafa '987 (US 20160353987 A1) in view of Carrafa '416 (US 20170042416 A1).
Regarding claim 5, the combination of Limon (US20210401282A1), MADDALENA (WO-0200105-A1), and Carrafa '987 (US 20160353987 A1) teaches the method of claim 3, but does not teach wherein assisting the user in determining whether the user is at the suitable distance from the display comprises: displaying a shape on the display; capturing an image of the displayed shape using the camera of the mobile device; and guiding the user to move away from or toward the display until the captured image of the displayed shape is of a size which corresponds to the suitable distance from the display.
Carrafa ‘416 (US 20170042416 A1) teaches wherein assisting the user in determining whether the user is at the suitable distance from the display (Fig. 5C: monitor screen 170) comprises: displaying a shape (Fig. 5C: target 830) on the display (Fig. 5C: monitor screen 170); capturing an image ([0042]: Step 230 of the process 200 includes obtaining an image of a display screen 170) of the displayed shape (Fig. 5C: target 830) using the camera (Fig. 1: camera 145) of the mobile device (Fig. 1: mobile device 120);
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method of claim 3 with the displayed shape in order to make the target easier for the camera to detect ([0043]: The target 830 may be any shape or pattern that aids in distinguishing itself from the surrounding environment so that the target area may be more easily detected. In FIG. 5C the target 830 is defined by a black box with a white interior).
Carrafa ‘416 (US 20170042416 A1) does not teach guiding the user to move away from or toward the display until the captured image of the displayed shape is of a size which corresponds to the suitable distance from the display.
Carrafa ‘987 (US 20160353987 A1) teaches guiding the user ([0058]: Step 250 of the process 200 includes guiding a user) to move away from or toward the display (Fig. 7A: monitor 170) ([0058]: Guiding may further comprise providing instructions to the user to continue to move to or from the monitor 170) until the captured image ([0057]: the monitor 170 may be maintained in the camera viewfinder) of the displayed shape is of a size which corresponds to the suitable distance from the display ([0062]: Step 260 of the process 200 includes providing an indication to a user once the designated distance has been reached. The indication may be a display on the monitor 170 or an output device 155 of the mobile device 120 of any general type that would allow a user to know that he or she can stop moving in relation to the monitor).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Carrafa ‘416 (US 20170042416 A1) with the user guidance taught by Carrafa ‘987 (US 20160353987 A1) in order to make it easier for a user to self-administer the test ([0032]: the provided guidance may facilitate a user to undergo an eye exam without the need for technical or trained personnel to administer the test).
7. Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Limon (US20210401282A1), MADDALENA (WO-0200105-A1), Carrafa '987 (US 20160353987 A1), and Carrafa '416 (US 20170042416 A1) in view of Jensen (US 20190175011 A1).
Regarding claim 6, the combination of Limon (US20210401282A1), MADDALENA (WO-0200105-A1), Carrafa '987 (US 20160353987 A1), and Carrafa '416 (US 20170042416 A1) teaches the method of claim 5, but does not teach wherein guiding the user to move away from or toward the display comprises: displaying a window on the mobile device; guiding the user toward the display if the captured image of the displayed shape is smaller than the window; and guiding the user away from the display if the captured image of the displayed shape is larger than the window.
Jensen (US 20190175011 A1) teaches wherein guiding the user to move away from or toward the display comprises: displaying a window (Fig. 12: silhouette 446) on the mobile device (Fig. 1: consumer device 102); guiding the user toward ([0036]: Audio instructions… “too far.”) the display (Fig. 1: display screen 114) if the captured image ([0036]: detected image of the user) of the displayed shape is smaller than the window ([0050]: the computer processor 122 a can carry out dynamic facial detection and compare the head size and position of the imagery of the user 104 to the size and position of the silhouette 446); and guiding the user away ([0036]: Audio instructions… “back up”) from the display (Fig. 1: display screen 114) if the captured image ([0036]: detected image of the user) of the displayed shape is larger than the window ([0050]: the computer processor 122 a can carry out dynamic facial detection and compare the head size and position of the imagery of the user 104 to the size and position of the silhouette 446).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method of claim 5 with the display window taught by Jensen (US 20190175011 A1) in order to provide visual feedback to the user ([0050]: As noted at step 306, while the real-time imagery of the user 104 is being acquired and displayed on the display screen 114 along with the silhouette 446, the consumer device instructs the user to position the consumer device 102 relative to the user 104, thereby providing real-time visual feedback to the user for proper positioning of the computerized consumer device 102).
8. Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Limon (US20210401282A1), MADDALENA (WO-0200105-A1), and Carrafa '987 (US 20160353987 A1) in view of Bartlett (US 20120050685 A1).
Regarding claim 7, the combination of Limon (US20210401282A1), MADDALENA (WO-0200105-A1), and Carrafa '987 (US 20160353987 A1) teaches the method of claim 3, but does not teach wherein the method comprises enforcing the user to maintain the suitable distance from the display during the step of performing an acuity test, wherein enforcing the user to maintain the suitable distance from the display comprises: performing the step of guiding the user to the suitable distance from the display a plurality of times during the step of performing an acuity test; and if, for any instance of the performance of the step of guiding the user to the suitable distance from the display, the user is not located at the suitable distance, then discontinuing the step of performing an acuity test until the user is located at the suitable distance.
Bartlett (US 20120050685 A1) teaches wherein the method comprises enforcing the user to maintain the suitable distance from the display ([0032]: ensure the user undertakes testing at an appropriate viewing distance) during the step of performing an acuity test ([0031]: the handheld device 100 is used to provide a vision test suitable for detecting eye diseases and to monitor their present status or severity. As some vision tests are sensitive to the distance from the user under test to the display device being observed, it may be important that the handheld device 100 operate in a fashion that ensures that each test sequence is undertaken at an appropriate viewing distance; [0039] mentions an acuity test; [0022]: Eye tests, which can be performed by an eye test algorithm, are for example:... Landolt Acuity Test (for determining visual acuity, performing a refraction test)),
wherein enforcing the user to maintain the suitable distance from the display ([0032]: ensure the user undertakes testing at an appropriate viewing distance) comprises: performing the step of guiding the user to the suitable distance from the display ([0033]: the handheld device 100 may signal to the user through a visible, audible, or other way that he or she is either too close or too far away) a plurality of times ([0031]: the camera 112 may take a picture or even continuously monitor a video sequence of the user and make appropriate measurements from the image to ensure that the user is at an acceptable distance from the handheld device 100; [0033]: The handheld device 100 may not continue operation of the testing until the user has positioned himself or herself at an appropriate distance so that accurate and reliable testing is substantially ensured). Because camera 112 continuously monitors a video sequence of the user, and because the device discontinues testing until the user repositions, the step of signaling the user through a visible, audible, or other way that he or she is either too close or too far away may occur more than once during the step of performing an acuity test ([0031]: vision test; [0039] mentions an acuity test);
and if, for any instance of the performance of the step of guiding the user to the suitable distance from the display, the user is not located at the suitable distance ([0033]: If the handheld device 100 detects that the user is not at a suitable distance for the test that is operating), then discontinuing the step of performing an acuity test until the user is located at the suitable distance. ([0033]: The handheld device 100 may not continue operation of the testing until the user has positioned himself or herself at an appropriate distance so that accurate and reliable testing is substantially ensured).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the combination with the user enforcement taught by Bartlett (US 20120050685 A1) in order to ensure accurate and reliable testing ([0033]: The handheld device 100 may not continue operation of the testing until the user has positioned himself or herself at an appropriate distance so that accurate and reliable testing is substantially ensured).
9. Claim(s) 9, 10, 12, and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Limon (US20210401282A1) and MADDALENA (WO-0200105-A1) in view of Bartlett (US 20120050685 A1).
Regarding claim 9, the combination of Limon (US20210401282A1) and MADDALENA (WO-0200105-A1) teaches the method of claim 8, but does not teach determining whether the user is correctly following the instructions using the camera of the mobile device.
Bartlett (US 20120050685 A1) teaches determining whether the user is correctly following the instructions ([0033]: ensure that the test is being undertaken correctly) using the camera (Fig. 1a: camera 112) of the mobile device (Fig. 1a: handheld device 100) ([0033]: the camera 112 can be used to monitor the user).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified claim 8 with the camera-based monitoring taught by Bartlett (US 20120050685 A1) to ensure the test is undertaken correctly ([0033]: If the camera 112 or some other means of monitoring the distance to the user from the handheld device 100 that can generate an image of the user is used; then it is further possible for the handheld device 100 to ensure that the test is being undertaken correctly).
Regarding claim 10, the combination of Limon (US20210401282A1) and MADDALENA (WO-0200105-A1) teaches the method of claim 8, but does not teach wherein determining whether the user is correctly following the instructions using the camera of the mobile device comprises capturing an image of the user using the camera of the mobile device and determining whether the user is covering an appropriate one of the user's eyes based on the image of the user.
Bartlett (US 20120050685 A1) teaches wherein determining whether the user is correctly following the instructions ([0033]: ensure that the test is being undertaken correctly) using the camera (Fig. 1a: camera 112) of the mobile device (Fig. 1a: handheld device 100) comprises capturing an image of the user ([0033]: the camera 112 can be used to monitor the user) using the camera (Fig. 1a: camera 112) of the mobile device (Fig. 1a: handheld device 100) and determining whether the user is covering an appropriate one of the user's eyes ([0033]: ensure that the user has the correct eye covered for each test) based on the image of the user ([0033]: If the camera 112 or some other means of monitoring the distance to the user from the handheld device 100 that can generate an image of the user is used; then it is further possible for the handheld device 100 to ensure that the test is being undertaken correctly).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified claim 8 with the camera-based monitoring taught by Bartlett (US 20120050685 A1) to ensure the test is undertaken correctly ([0033]: If the camera 112 or some other means of monitoring the distance to the user from the handheld device 100 that can generate an image of the user is used; then it is further possible for the handheld device 100 to ensure that the test is being undertaken correctly).
Regarding claim 12, the combination of Limon (US20210401282A1) and MADDALENA (WO-0200105-A1) teaches the method of claim 1, but does not teach detecting when the user is at a suitable distance from the display for performing an acuity test.
Bartlett (US 20120050685 A1) teaches detecting when the user is at a suitable distance from the display for performing an acuity test ([0031]: the camera 112 may take a picture or even continuously monitor a video sequence of the user and make appropriate measurements from the image to ensure that the user is at an acceptable distance from the handheld device 100 during a vision test; [0039] provides an acuity check).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method of claim 1 with the camera taught by Bartlett (US 20120050685 A1) in order to ensure the user is at the correct distance for accurate testing ([0033]: The handheld device 100 may not continue operation of the testing until the user has positioned himself or herself at an appropriate distance so that accurate and reliable testing is substantially ensured).
Regarding claim 13, the combination of Limon (US20210401282A1), MADDALENA (WO-0200105-A1), and Bartlett (US 20120050685 A1) teaches the method of claim 12.
Bartlett (US 20120050685 A1) further teaches the method comprising detecting when the user has moved away from the suitable distance from the display ([0031]: the camera 112 may take a picture or even continuously monitor a video sequence of the user and make appropriate measurements from the image to ensure that the user is at an acceptable distance from the handheld device 100).
10. Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Limon (US20210401282A1), MADDALENA (WO-0200105-A1), and Bartlett (US 20120050685 A1) in view of Drozdov (WO 2020110121 A1).
Regarding claim 11, the combination of Limon (US20210401282A1), MADDALENA (WO-0200105-A1), and Bartlett (US 20120050685 A1) teaches the method of claim 10, but does not teach wherein determining whether the user is covering an appropriate one of the user's eyes based on the image of the user comprises using an artificial intelligence-based classifier.
Drozdov (WO 2020110121 A1) teaches wherein determining whether the user is covering an appropriate one of the user's eyes (Fig. 3: eye detection 302) ([00035]: a plurality of images of the user interactions with the screen in identifying and focusing on points of reference 801 i are captured 301, after which, two separate algorithms for eye region localization/detection 302 and face detection 303 are employed), based on the image of the user (Fig. 3: image 301) comprises using an artificial intelligence-based classifier ([00036]: Other methods of eye region localization can be employed, for example: using edge projection (GPF) and support vector machines (SVMs) to classify estimates of eye centers… and verify the remaining configurations using two SVM classifiers, and using an eye detector to validate the presence of a face and to initialize an eye locator, which, in turn, refines the position of the eye using the SVM on optimally selected Haar wavelet coefficients). A support vector machine (SVM) classifier is a supervised machine learning algorithm that is used to train artificial intelligence models.
It would have been obvious to a person having ordinary skill in the art to utilize eye localization/detection together with face detection ([00035]: algorithms for eye region localization/detection 302 and face detection 303) to determine which of the user’s eyes are covered.
Furthermore, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method of claim 10 with the artificial intelligence-based classifier taught by Drozdov (WO 2020110121 A1) in order to better identify the position of the user's eye when determining whether the correct eye is closed ([00036]: refines the position of the eye using the SVM).
11. Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bartlett (US 20120050685 A1) in view of Maier (US 20190021588 A1).
Regarding claim 16, Bartlett (US 20120050685 A1) teaches a method for determining when a user is correctly following instructions ([0033]: it is further possible for the handheld device 100 to ensure that the test is being undertaken correctly) in a vision test ([0031]: the handheld device 100 is used to provide a vision test), whereby the user is able to respond via a tactile input (Fig. 1a: touching the display 104) to a mobile device being held in the user's hand ([0028]: The display 104 may include touch-screen and/or multi-touch capability so that the handheld device 100 may be controlled by touching the display 104), or via an audio input (Fig. 1a: microphone 122) to the mobile device using their voice ([0028]: The handheld device 100 may also include a microphone 122 and audio processing capability so that voice or sound commands may be used to control it).
Bartlett (US 20120050685 A1) does not teach that the vision test is conducted on a computer which is separate from the mobile device being held in the user's hand, or that the user can respond to the computer via the mobile device or via an audio input to the computer.
Maier (US 20190021588 A1) teaches a computer-conducted vision test (Fig. 3: remote server 17; [0001]: a method for self-examination of a user's eye) whereby the user is able to respond to the computer via a tactile input ([0066]: the smartphone 21 will communicate via its interface 21b wirelessly over a network/internet with the projector 12 and the remote server 17).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Bartlett (US 20120050685 A1) with the separate computer and mobile device taught by Maier (US 20190021588 A1) in order to better facilitate testing. Moreover, because Bartlett (US 20120050685 A1) teaches that an external host computer may be connected to the handheld device via interface port 118 ([0029]: Interface port 118 allows the handheld device 100 to be connected to an external host computer... The wired and or wireless connectivity of the handheld device 100 allows it to send information to a... host computer, or other computing device; and also allows for... test protocols... to be sent from a host computer or other data processing device or interface to the handheld device 100), a person having ordinary skill in the art would have been capable of making this modification.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Huang (WO 2019099952 A1) teaches a mobile phone-based acuity test and axis test.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PINPING SUN whose telephone number is (571)270-1284. The examiner can normally be reached 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PINPING SUN/ Supervisory Patent Examiner, Art Unit 2872