Prosecution Insights
Last updated: April 19, 2026
Application No. 17/891,736

DIGITAL CONTACT LENS INSERTION AND REMOVAL AID

Final Rejection §103
Filed: Aug 19, 2022
Examiner: WRIGHT, ANDREW RUSSELL
Art Unit: 2872
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Johnson & Johnson Vision Care Inc.
OA Round: 2 (Final)
Grant Probability: 55% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Grants 55% of resolved cases.

Career Allow Rate: 55% (11 granted / 20 resolved; -13.0% vs TC avg)
Interview Lift: +50.0% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 1m average prosecution; 35 applications currently pending
Career History: 55 total applications across all art units
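These headline figures are simple ratios over the examiner's resolved cases. Below is a minimal Python sketch of how a dashboard could derive them; the record format is hypothetical, only the 11-granted / 20-resolved split comes from this page, and the lift definition (allowance with interview minus allowance without) is an assumption, since the page does not state one.

```python
# Hypothetical resolved-case records; field names are illustrative only.
# Only the 11 granted / 20 resolved totals are taken from this page.

def career_allow_rate(cases):
    resolved = [c for c in cases if c["resolved"]]
    granted = sum(c["granted"] for c in resolved)
    return granted / len(resolved)          # 11 / 20 = 0.55 for this examiner

def interview_lift(cases):
    # Assumed definition: allowance rate among interviewed resolved cases
    # minus the rate among non-interviewed ones. The page reports +50.0%,
    # though its exact definition is not stated.
    def rate(subset):
        return sum(c["granted"] for c in subset) / len(subset)
    resolved = [c for c in cases if c["resolved"]]
    with_int = [c for c in resolved if c["interview"]]
    without = [c for c in resolved if not c["interview"]]
    return rate(with_int) - rate(without)
```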

Statute-Specific Performance

§103: 68.0% (+28.0% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 20 resolved cases.
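The deltas in this table are plain differences against the Tech Center average, and all three reported deltas imply the same 40.0% TC average (e.g., 68.0 - 28.0 = 40.0). A quick check in Python:

```python
# Examiner allowance rates after each rejection type, from the table above.
examiner_rate = {"§103": 68.0, "§102": 16.3, "§112": 14.3}
tc_average = 40.0   # implied by every reported delta, e.g. 68.0 - (+28.0)

deltas = {s: round(r - tc_average, 1) for s, r in examiner_rate.items()}
print(deltas)       # {'§103': 28.0, '§102': -23.7, '§112': -25.7}
```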

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 13, 16 and 35 are amended, claims 1-12 and 39-43 are withdrawn, and claims 27-34 are canceled.

Response to Arguments

Applicant's arguments with respect to claims 13 and 35 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 13-14 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Sabeta (US 20070274626 A1) (paragraph [0090]) in view of Sabeta (US 20070274626 A1) (paragraph [0084]).

Regarding claim 13, Sabeta discloses, in at least the embodiment of paragraph [0090], a method for contact lens insertion or removal assistance (processing the coordinates of the alignment means on the contact lens 10 to detect the position of the contact lens 10 as installed on the eye based on an objective decision, paragraph [0090]), the method comprising: capturing one or more first images (photographing an anterior segment of the eye, paragraph [0090]) of at least a portion of a face of the user (anterior segment of the eye, paragraph [0090]) by an imaging device (photographing means, paragraph [0090]), wherein the imaging device includes a user interface (display means, paragraph [0090]) and is configured to capture images of the face of the user while viewing the user interface (the eye may be photographed while being exposed to light of varying intensities or illuminance, to provide responsive images of the eye showing the pupil, iris and sclera, on a display means, paragraph [0090]), wherein the one or more first images comprise a representation of an eye of the user (anterior segment of the eye, paragraph [0090]); capturing one or more second images (the acquired images of the eye and the lens, paragraph [0090]) of at least a portion of the face of the user by the imaging device, wherein the one or more second images comprise a representation of the eye of the user and a representation of a contact lens on the eye of the user (the actual prescribed lens 10 is properly placed or oriented within the eye, paragraph [0090]); analyzing, based on the one or more second images, placement of the contact lens on the eye of the user (this system 23 to ensure that the actual prescribed lens 10 is properly placed or oriented within the eye, paragraph [0090]); and outputting, via the user interface, an indication of proper placement or improper placement of the contact lens (the actual prescribed lens 10 is properly placed or oriented within the eye according to the prescription fitting details, using the advisory signals outputted by the system 23, paragraph [0090]) based on the analyzing placement of the contact lens (orientation of the lens, paragraph [0090]).

Sabeta does not disclose in paragraph [0090]: causing, via the user interface and based on the one or more first images, output of a visual clue indicative of a user state of the user relative to the eye of the user, wherein the visual clue has a first state indicative of a rejected user state and a second state indicative of an accepted user state; and causing, via the user interface and in response to an accepted user state, output of one or more user instructions for insertion of a contact lens.

However, Sabeta discloses in at least the embodiment of paragraph [0084]: causing, via the user interface (display 56, fig. 6) and based on the one or more first images (the first images taught above by the embodiment of paragraph [0090]), output of a visual clue (a confirmatory message or warning message is provided to the user, either visually or auditorily, after the position is detected by the reader, paragraph [0084]; the reader can output an image of the lens 10 on a display 56 and can show the orientation of the lens 10 with respect to the eye of a user, paragraph [0089]) indicative of a user state of the user relative to the eye of the user (when the contact lens 10 is properly oriented for insertion, paragraph [0084]), wherein the visual clue has a first state indicative of a rejected user state (the inverted state issues a warning message, paragraph [0084]) and a second state indicative of an accepted user state (when the contact lens 10 is properly oriented for insertion, paragraph [0084]); and causing, via the user interface and in response to an accepted user state, output of one or more user instructions for insertion of a contact lens (the reader 34 processes the data from any of the tags 20, 22 or 25 to determine the current orientation of the lens 10 with respect to the user's eye, and provides feedback to the user on how to proceed, such as to proceed with insertion when the lens 10 is properly oriented, paragraph [0088]; in another embodiment, the reader 34, as described above, outputs an image of the lens 10 on a display 56 and can show the orientation of the lens 10 with respect to the eye of a user, paragraph [0089]).

Therefore it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to use the reader to output an image to determine the orientation of the lens and output feedback to proceed with insertion of the lens, as taught by Sabeta paragraph [0084], in the method of paragraph [0090], which then uses a second image of the lens and the eye to ensure the lens is properly placed within the eye. The methods are different embodiments of the same invention and can be used together in a process to ensure the correct insertion of a contact lens.
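The claim 13 sequence the rejection maps onto Sabeta is easiest to see as a short capture/cue/verify pipeline. Below is a minimal Python sketch of that flow; the UserState enum, the camera and ui objects, and both analyzer stubs are hypothetical stand-ins, not code from the application or the cited references.

```python
from enum import Enum

class UserState(Enum):
    ACCEPTED = "accepted"   # e.g., eye open and lens oriented for insertion
    REJECTED = "rejected"   # e.g., eye closed or lens inverted

def detect_user_state(images):
    """Hypothetical analyzer for the 'first images' (cf. Sabeta [0084])."""
    return UserState.ACCEPTED  # stub

def analyze_lens_placement(images):
    """Hypothetical analyzer for the 'second images' (cf. Sabeta [0090])."""
    return True  # stub: True = properly placed

def assist_insertion(camera, ui):
    # First images: the user's face, including the eye, captured while
    # the user views the interface.
    first_images = camera.capture()

    # Visual clue with two states: accepted vs. rejected user state.
    state = detect_user_state(first_images)
    ui.show_cue("accepted" if state is UserState.ACCEPTED else "rejected")

    # In response to an accepted state, output insertion instructions.
    if state is UserState.ACCEPTED:
        ui.show_instructions("Proceed with insertion")

    # Second images: the eye with the contact lens on it.
    second_images = camera.capture()

    # Analyze placement and indicate proper or improper placement.
    ok = analyze_lens_placement(second_images)
    ui.show_result("proper placement" if ok else "improper placement")
```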
Regarding claim 14, the combination of Sabeta paragraphs [0090] and [0084] discloses all the limitations of claim 13, and Sabeta paragraph [0090] further discloses wherein the one or more first images comprise a representation of an iris of the eye of the user (the eye may be photographed to provide responsive images of the eye showing the iris, paragraph [0090]).

Regarding claim 16, the combination of Sabeta paragraphs [0090] and [0084] discloses all the limitations of claim 13, and Sabeta paragraph [0090] further discloses providing, via the user interface (display means, paragraph [0090]) and based on the one or more first images, output prompts to the user until an accepted user state is detected (the lens wearer can use this system 23 to ensure that the actual prescribed lens 10 is properly placed or oriented within the eye according to the prescription fitting details, using the advisory signals outputted by the system 23, paragraph [0090]).

Regarding claim 17, the combination of Sabeta paragraphs [0090] and [0084] discloses all the limitations of claim 13. Sabeta paragraph [0090] does not disclose wherein the one or more user instructions for insertion of a contact lens comprise an audio instruction, a visual instruction, or both. However, Sabeta paragraph [0084] further discloses wherein the one or more user instructions for insertion of a contact lens (the reader 34 processes the data from any of the tags 20, 22 or 25 to determine the current orientation of the lens 10 with respect to the user's eye, and provides feedback to the user on how to proceed, such as to proceed with insertion when the lens 10 is properly oriented, paragraph [0088]; the reader 34 can also output an image of the lens 10 on a display 56 showing the orientation of the lens 10 with respect to the eye of a user, paragraph [0089]) comprise an audio instruction, a visual instruction, or both (when the contact lens 10 is properly oriented for insertion, a confirmatory message can be provided to the user, either visually or auditorily, paragraph [0084]).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Sabeta (US 20070274626 A1) (paragraph [0090]) in view of Sabeta (US 20070274626 A1) (paragraph [0084]) as applied to claim 13 above, and further in view of Bazin (US 20220261085 A1).

Regarding claim 15, the combination of Sabeta paragraphs [0090] and [0084] discloses all the limitations of claim 13. Sabeta paragraph [0090] does not disclose wherein the user state comprises one or more of an openness of the eye of the user, a characteristic of another eye of the user, and a finger placement of the user relative to the eye. However, Bazin discloses in at least figure 2 wherein the user state comprises one or more of an openness of the eye of the user (the measuring eye is open, paragraph [0041]), a characteristic of another eye of the user (the other eye is closed, paragraph [0041]), and a finger placement of the user relative to the eye (the 3D coordinates of the finger and the measuring eye are determined, paragraph [0041]).

Bazin further teaches (paragraph [0041]): "At block 250, the method 200 obtains the 3D coordinates of the finger or fingertip and the 3D coordinates of a "measuring" eye of the user of the electronic device. In some implementations, the 3D coordinates of the fingertip are determined using the same techniques used to detect the finger state at block 240 or to detect the finger at block 230. In some implementations, the 3D coordinates of the fingertip may be determined when the finger state is detected at block 240 or when the finger is detected at block 230. In some implementations, the measuring eye is determined by an inward facing image sensor at the electronic device. For example, when one eye is open and the other eye is closed, an image from the inward facing image sensor determines the open eye to be the measuring eye. In some implementations, preset information is used to determine the measuring eye. For example, the dominant eye may be preset as the measuring eye of the user. Alternatively, the measuring eye is preset in a registration process of the measurement capability on the electronic device. In some implementations, the measuring eye has a known spatial relationship to the electronic device. Then, at block 250 the method 200 computes a line of sight (LOS) ray (e.g., a 3D line) extending from the 3D coordinates of the measuring eye through the 3D coordinates of the fingertip into the 3D environment."

Therefore it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to determine the position of the finger and open eye, as taught by Bazin, in the contact lens assistance method of Sabeta. One would have been motivated to determine the position of the finger and open eye because Bazin teaches that a line of sight can be calculated from the positions of the finger and open eye (Bazin paragraph [0041]).
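The LOS computation Bazin describes is ordinary vector arithmetic: a ray anchored at the measuring eye's 3D coordinates and directed through the fingertip's 3D coordinates. A minimal NumPy sketch under that reading; the coordinate values are arbitrary illustrative inputs, not data from Bazin.

```python
import numpy as np

# 3D coordinates of the measuring (open) eye and of the fingertip,
# per Bazin [0041]. The numbers are illustrative, not from the reference.
eye = np.array([0.0, 0.0, 0.0])
fingertip = np.array([0.05, -0.02, 0.30])

# LOS ray: origin at the eye, unit direction through the fingertip.
direction = fingertip - eye
direction /= np.linalg.norm(direction)

def los_point(t):
    """Point on the LOS ray at distance t from the eye."""
    return eye + t * direction

print(los_point(1.0))  # where the ray reaches one unit into the 3D environment
```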
Claims 18 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Sabeta (US 20070274626 A1) (paragraph [0090]) in view of Sabeta (US 20070274626 A1) (paragraph [0084]) as applied to claim 13 above, and further in view of Kim (KR 101164786 B1).

Regarding claim 18, the combination of Sabeta paragraphs [0090] and [0084] discloses all the limitations of claim 13. Sabeta paragraph [0090] does not disclose wherein analyzing, based on the one or more second images, placement of the contact lens on the eye of the user comprises using an edge detection model to detect an edge of the contact lens. However, Kim discloses in at least figure 8 wherein analyzing, based on the one or more second images (second image of the monochrome subject eye, paragraph [0021] of translation), placement of the contact lens (boundary surface K of the contact lens, paragraph [0021] of translation) on the eye of the user (monochrome image of the subject's eye, paragraph [0021] of translation) comprises using an edge detection model (general edge search algorithm, paragraph [0021] of translation) to detect an edge of the contact lens (in determining the above boundary point (Ki), a general edge search algorithm can be used, paragraph [0021] of translation). Therefore it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to use the edge detection model as taught by Kim in the contact lens assistance method of Sabeta. The edge detection method allows the position of the contact lens to be adjusted to the center of the pupil (paragraph [0021] of translation).

Regarding claim 22, the combination of Sabeta paragraphs [0090] and [0084] discloses all the limitations of claim 13. Sabeta paragraph [0090] does not disclose wherein analyzing, based on the one or more second images, placement of the contact lens on the eye of the user comprises using a mask for edge detection to detect an edge of the contact lens. However, Kim discloses in at least figure 8 wherein analyzing, based on the one or more second images, placement of the contact lens on the eye of the user comprises using a mask for edge detection to detect an edge of the contact lens (a general edge search algorithm, such as zero crossing, is applied to a profile obtained by convolving the gradient mask on the radiation lines directed outward from the pupil, thereby detecting two edges, paragraph [0018] of translation). Therefore it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to use the edge detection model as taught by Kim in the contact lens assistance method of Sabeta. The edge detection method allows the position of the contact lens to be adjusted to the center of the pupil (paragraph [0021] of translation).
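Kim's translated description (a gradient mask convolved along rays radiating outward from the pupil, then a zero-crossing-style edge search on each profile) corresponds to a standard radial edge detector. The NumPy sketch below is a rough illustration of that idea, not Kim's exact algorithm; the Gaussian smoothing width and the "two strongest gradients" heuristic are assumptions.

```python
import numpy as np

def radial_lens_edges(gray, pupil_center, angle, r_max, n_samples=200):
    """Sample intensity along a ray from the pupil center outward, smooth it,
    and return the radii of the two strongest gradient responses (cf. Kim's
    two detected edges). Rough sketch only: Kim applies a zero-crossing edge
    search to the convolved profile; here the two largest |gradient| peaks
    stand in for that step."""
    cy, cx = pupil_center
    radii = np.linspace(0.0, r_max, n_samples)
    ys = np.clip((cy + radii * np.sin(angle)).astype(int), 0, gray.shape[0] - 1)
    xs = np.clip((cx + radii * np.cos(angle)).astype(int), 0, gray.shape[1] - 1)
    profile = gray[ys, xs].astype(float)

    g = np.exp(-0.5 * np.linspace(-2.0, 2.0, 9) ** 2)  # Gaussian mask (assumed width)
    g /= g.sum()
    grad = np.gradient(np.convolve(profile, g, mode="same"))

    two_strongest = np.sort(np.argsort(np.abs(grad))[-2:])
    return radii[two_strongest]
```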
Claims 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Sabeta (US 20070274626 A1) (paragraph [0090]) in view of Sabeta (US 20070274626 A1) (paragraph [0084]) as applied to claim 13 above, and further in view of Fartukov (KR 20180006284 A).

Regarding claim 19, the combination of Sabeta paragraphs [0090] and [0084] discloses all the limitations of claim 13. Sabeta paragraph [0090] does not disclose wherein analyzing, based on the one or more second images, placement of the contact lens on the eye of the user comprises pre-processing the one or more second images using one or more of cropping, masking, removal, or a combination thereof. However, Fartukov discloses in at least figure 4 wherein analyzing, based on the one or more second images (normalized iris image 410, fig. 4), placement of the contact lens on the eye of the user (the iris image can be used in mobile devices associated with contact lenses, paragraph [0004] of translation) comprises pre-processing the one or more second images using removal (a process of removing obstacles such as eyelids and eyelashes from the iris image is necessary, paragraph [0027] of translation).

Fartukov further teaches (paragraph [0027] of translation): "Figure 4 shows an example of iris image normalization. On the left side of Fig. 4, an annular iris image (400) is shown. And on the right side of Fig. 4, a normalized iris image (410) of an annular iris image is shown. As in the example of the normalized iris image (410) of Fig. 4, if part of the iris is covered by the eyelids or eyelashes, the performance of iris recognition may deteriorate. Therefore, a process of removing obstacles such as eyelids and eyelashes from the iris image is necessary. In this specification, the area corresponding to an obstruction such as an eyelid or eyelash is defined as a non-iris object area. Conversely, the area corresponding to the iris is defined as the iris object area."

Therefore it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to remove eyelashes from the image, as taught by Fartukov, in the contact lens assistance method of Sabeta. One would have been motivated to perform the step of removing the eyelashes from the image because Fartukov teaches that if part of the iris is covered by the eyelids or eyelashes, the performance of iris recognition may deteriorate (Fartukov paragraph [0027] of translation).

Regarding claim 20, the combination of Sabeta paragraphs [0090] and [0084] discloses all the limitations of claim 13. Sabeta paragraph [0090] does not disclose wherein analyzing, based on the one or more second images, placement of the contact lens on the eye of the user comprises pre-processing the one or more second images by digitally removing an artifact from the one or more second images. However, Fartukov discloses in at least figure 4 wherein analyzing, based on the one or more second images (normalized iris image 410, fig. 4), placement of the contact lens on the eye of the user (paragraph [0004] of translation) comprises pre-processing the one or more second images by digitally removing an artifact from the one or more second images (a process of removing obstacles such as eyelids and eyelashes from the iris image is necessary, paragraph [0027] of translation, as quoted above for claim 19). The same rationale and motivation apply as for claim 19.

Regarding claim 21, the combination of Sabeta paragraphs [0090] and [0084] discloses all the limitations of claim 13. Sabeta paragraph [0090] does not disclose wherein the artifact comprises an eyelash of the user. However, Fartukov further discloses wherein the artifact comprises an eyelash of the user (a process of removing obstacles such as eyelids and eyelashes from the iris image is necessary, paragraph [0027] of translation, as quoted above for claim 19). The same rationale and motivation apply as for claim 19.
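Fartukov's "non-iris object area" is, in effect, a binary mask over the normalized iris image. The quoted passage gives no algorithm, so the sketch below uses a simple intensity-outlier heuristic as an assumed stand-in; production systems typically use a trained segmenter instead.

```python
import numpy as np

def iris_object_mask(norm_iris):
    """Return True where pixels look like iris texture and False over the
    'non-iris object area' (eyelids, eyelashes), in the spirit of Fartukov
    [0027]. The 2.5-sigma intensity test is an assumed heuristic, not
    Fartukov's method."""
    mu, sigma = norm_iris.mean(), norm_iris.std()
    return np.abs(norm_iris - mu) < 2.5 * sigma

# Usage: keep iris pixels, blank out occluded ones before further analysis.
# iris_only = np.where(iris_object_mask(img), img, 0)
```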
Claims 23-26 are rejected under 35 U.S.C. 103 as being unpatentable over Sabeta (US 20070274626 A1) (paragraph [0090]) in view of Sabeta (US 20070274626 A1) (paragraph [0084]) as applied to claim 13 above, and further in view of Awdeh (US 10073515 B2).

Regarding claim 23, the combination of Sabeta paragraphs [0090] and [0084] discloses all the limitations of claim 13. Sabeta paragraph [0090] does not disclose wherein the one or more first images are captured from a video. However, Awdeh discloses in at least figure 1 wherein the one or more first images (patient images from image source 3, col. 22 lines 17-18) are captured from a video (the system comprises an image acquisition device configured to generate substantially real-time digital video data representing patient images of an eye of the patient, col. 2 lines 10-13).

Awdeh further teaches (col. 23 lines 25-30): "Systems and methods embodying principles of the present disclosure may provide a surgeon with the benefit of having real-time data superimposed over the visual surgical field. Additionally, it may give the surgeon the ability to perform camera-based operations with three-dimensional visualization, as compared to current two-dimensional technologies."

Therefore it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to use real-time video as taught by Awdeh in the contact lens assistance method of Sabeta. One would have been motivated to use real-time video because Awdeh teaches that real-time video is a benefit and can provide three-dimensional visualization (Awdeh col. 23 lines 25-30).

Regarding claim 24, the combination of Sabeta paragraphs [0090] and [0084] discloses all the limitations of claim 13. Sabeta paragraph [0090] does not disclose wherein the one or more second images are captured from a video. However, Awdeh discloses in at least figure 1 wherein the one or more second images (the overlaid image merges video data of the patient image with external data, col. 22 lines 34-37) are captured from a video (col. 2 lines 10-13, as cited above). The same rationale and motivation apply as for claim 23.

Regarding claim 25, the combination of Sabeta paragraphs [0090] and [0084] discloses all the limitations of claim 13. Sabeta paragraph [0090] does not disclose wherein the one or more first images are captured from a real-time video. However, Awdeh discloses in at least figure 1 wherein the one or more first images (patient images from image source 3, col. 22 lines 17-18) are captured from a real-time video (substantially real-time digital video data representing patient images of an eye of the patient, col. 2 lines 10-13). The same rationale and motivation apply as for claim 23.

Regarding claim 26, the combination of Sabeta paragraphs [0090] and [0084] discloses all the limitations of claim 13. Sabeta paragraph [0090] does not disclose wherein the one or more second images are captured from a real-time video. However, Awdeh discloses in at least figure 1 wherein the one or more second images (the overlaid image merges video data of the patient image with external data, col. 22 lines 34-37) are captured from a real-time video (substantially real-time digital video data representing patient images of an eye of the patient, col. 2 lines 10-13). The same rationale and motivation apply as for claim 23.
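Capturing the claimed "first" and "second" images from a substantially real-time video stream, as Awdeh describes, amounts to grabbing individual frames. A minimal OpenCV sketch; the device index and saving-to-file are assumptions about a local webcam setup, not anything from the references.

```python
import cv2

# Grab single frames from a real-time video stream (cf. Awdeh's
# "substantially real-time digital video data"). Device index 0 is an
# assumption about the local camera.
cap = cv2.VideoCapture(0)

ok1, first_image = cap.read()   # one frame serves as a captured "first image"
ok2, second_image = cap.read()  # a later frame serves as a "second image"

if ok1 and ok2:
    cv2.imwrite("first_image.png", first_image)
    cv2.imwrite("second_image.png", second_image)

cap.release()
```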
Claims 35-36 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Sabeta (US 20070274626 A1) (paragraph [0090]) in view of Sabeta (US 20070274626 A1) (paragraph [0084]), in view of Awdeh (US 10073515 B2) and Fukuda (US 20180234415 A1).

Regarding claim 35, Sabeta discloses, in at least the embodiment of paragraph [0090], a method for contact lens insertion or removal assistance (processing the coordinates of the alignment means on the contact lens 10 to detect the position of the contact lens 10 as installed on the eye based on an objective decision, paragraph [0090]), the method comprising: an imaging device (photographing means, paragraph [0090]), wherein the imaging device includes a user interface (display means, paragraph [0090]) and is configured to capture images of the face of the user while viewing the user interface (the eye may be photographed while being exposed to light of varying intensities or illuminance, to provide responsive images of the eye showing the pupil, iris and sclera, on a display means, paragraph [0090]); analyzing a user state of the user to determine whether the user state is an accepted user state (the system 23 ensures the lens is properly placed or oriented within the eye using the advisory signals, paragraph [0090], which can be indicative of the orientation of the device with respect to the desired application site, paragraph [0020]) or a rejected user state (the advisory signals can be an aid to correct the rotation or orientation of the device for placement in the preferential orientation of the lens, paragraph [0020]); and causing, via the user interface, output of a visual clue (advisory signals outputted by the system 23, paragraph [0090], which may be visual, paragraph [0089]) indicative of the user state of the user relative to the eye of the user (properly placed or oriented within the eye, paragraph [0090]), wherein the visual clue has a first state indicative of a rejected user state (paragraph [0020], as above) and a second state indicative of an accepted user state (paragraph [0020], as above).

Sabeta does not disclose: capturing a real-time video of at least a portion of a face of the user, wherein the video comprises a representation of an eye of the user; analyzing, based at least on the video, a user state of the user to determine whether the user state is an accepted user state or a rejected user state; causing, via the user interface and based on the video, output of a visual clue indicative of the user state of the user relative to the eye of the user; causing, via the user interface and based on the user state, one or more real-time prompts to be outputted to the user; and repeating steps a-d until the user state is determined to be an accepted user state.

However, Sabeta discloses in the embodiment of paragraph [0084]: causing, via the user interface (display 56, fig. 6) and based on the user state (when the contact lens 10 is properly oriented for insertion, paragraph [0084]), one or more real-time prompts to be outputted to the user (the reader 34 processes the data from any of the tags 20, 22 or 25 to determine the current orientation of the lens 10 with respect to the user's eye, and provides feedback to the user on how to proceed, such as to proceed with insertion when the lens 10 is properly oriented, paragraph [0088]; in another embodiment, the reader 34, as described above, outputs an image of the lens 10 on a display 56 and can show the orientation of the lens 10 with respect to the eye of a user, paragraph [0089]). Therefore it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to use the reader to output an image to determine the orientation of the lens and output feedback to proceed with insertion of the lens, as taught by Sabeta paragraph [0084], in the method of paragraph [0090], which then uses an image of the lens and the eye to ensure the lens is properly placed within the eye. The methods are different embodiments of the same invention and can be used together in a process to ensure the correct insertion of a contact lens.

Additionally, Awdeh discloses in at least figure 1: capturing a real-time video of at least a portion of a face of the user (the system comprises an image acquisition device configured to generate substantially real-time digital video data representing patient images of an eye of the patient, col. 2 lines 10-13), wherein the video comprises a representation of an eye of the user (patient images of an eye of the patient, col. 2 lines 10-13); analyzing based at least on the video (col. 2 lines 10-13); causing, based on the video, output of a visual clue indicative of the user state of the user relative to the eye of the user (visual clue and user state taught above by Sabeta); and causing, based on the video and the user state, one or more real-time prompts to be outputted to the user (user state and prompts taught above by Sabeta).

Awdeh further teaches the real-time benefits quoted above for claim 23 (col. 23 lines 25-30). Therefore it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to use real-time video as taught by Awdeh in the contact lens assistance method of Sabeta. One would have been motivated to use real-time video because Awdeh teaches that real-time video is a benefit and can provide three-dimensional visualization (Awdeh col. 23 lines 25-30).

Further, Fukuda discloses in at least figure 22 repeating steps a-d until the user state is determined to be an accepted user state (this process is repeated until the wearing position determination unit 34 determines that the wearing position is the position at which authentication is to be performed, paragraph [0200]). Therefore it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to repeat the steps as taught by Fukuda in the method of Sabeta. The steps are repeated until authentication can be performed.
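Fukuda's contribution to the combination is just the retry loop: repeat the capture/analyze/cue/prompt steps until the state is accepted. A short sketch reusing the hypothetical helpers from the claim 13 sketch above; the iteration cap is an added safety assumption, not something claimed or taught.

```python
def assist_until_accepted(camera, ui, max_iters=100):
    """Repeat steps a-d until an accepted user state (cf. Fukuda [0200]).
    Reuses the hypothetical UserState/detect_user_state stubs from the
    claim 13 sketch; max_iters is an added safeguard, not claimed."""
    for _ in range(max_iters):
        frames = camera.capture()                            # (a) real-time video frames
        state = detect_user_state(frames)                    # (b) analyze user state
        accepted = state is UserState.ACCEPTED
        ui.show_cue("accepted" if accepted else "rejected")  # (c) visual clue
        if accepted:
            return True
        ui.show_instructions("Adjust position and try again")  # (d) real-time prompt
    return False
```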
Regarding claim 36, the combination of Sabeta paragraphs [0090] and [0084], Awdeh and Fukuda discloses all the limitations of claim 35, and Sabeta paragraph [0090] further discloses wherein the accepted user state comprises a properly inserted contact lens on the eye of the user (the system 23 ensures the lens is properly placed or oriented within the eye using the advisory signals, paragraph [0090], which can be indicative of the orientation of the device with respect to the desired application site, paragraph [0020]).

Regarding claim 38, the combination of Sabeta paragraphs [0090] and [0084], Awdeh and Fukuda discloses all the limitations of claim 35. Sabeta paragraph [0090] does not disclose wherein the one or more real-time prompts comprise an instruction for insertion or removal of a contact lens. However, Sabeta paragraph [0084] further discloses wherein the one or more real-time prompts comprise an instruction for insertion or removal of a contact lens (the reader 34 processes the data from any of the tags 20, 22 or 25 to determine the current orientation of the lens 10 with respect to the user's eye, and provides feedback to the user on how to proceed, such as to proceed with insertion when the lens 10 is properly oriented, paragraph [0088]; in another embodiment, the reader 34, as described above, outputs an image of the lens 10 on a display 56 and can show the orientation of the lens 10 with respect to the eye of a user, paragraph [0089]).

Claim 37 is rejected under 35 U.S.C. 103 as being unpatentable over Sabeta (US 20070274626 A1) (paragraph [0090]) in view of Sabeta (US 20070274626 A1) (paragraph [0084]), Awdeh (US 10073515 B2) and Fukuda (US 20180234415 A1) as applied to claim 35 above, and further in view of Santos-Villalobos et al. (US 20160335474 A1).

Regarding claim 37, the combination of Sabeta paragraphs [0090] and [0084], Awdeh and Fukuda discloses all the limitations of claim 35. Sabeta paragraph [0090] does not disclose wherein the accepted user state comprises a properly removed contact lens on the eye of the user. However, Santos-Villalobos discloses in at least figure 12 wherein the accepted user state (no contact lens is covering the iris, paragraph [0147]) comprises a properly removed contact lens on the eye of the user (a remove-contact-lens warning is displayed when a contact lens is covering the iris, paragraph [0147]). Therefore it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to use an accepted user state when the contact lens is removed, as taught by Santos-Villalobos, in the method of Sabeta. The contact lens detection can be used to determine whether the iris is covered.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Katz et al. (US 20200103980 A1) discloses a method for triggering actions based on touch-free gestures, with contacts and images of the eyes that are processed for information.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW R WRIGHT, whose telephone number is (703) 756-5822. The examiner can normally be reached Mon-Thurs 7:30-5 and Friday 8-12. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pinping Sun, can be reached at 1-571-270-1284. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW R WRIGHT/
Examiner, Art Unit 2872

/PINPING SUN/
Supervisory Patent Examiner, Art Unit 2872

Prosecution Timeline

Aug 19, 2022: Application Filed
Oct 03, 2025: Non-Final Rejection — §103
Nov 21, 2025: Response Filed
Feb 20, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601858: LIGHT CONTROL FILM (2y 5m to grant; granted Apr 14, 2026)
Patent 12585165: OPTICAL ELEMENT DRIVING MECHANISM (2y 5m to grant; granted Mar 24, 2026)
Patent 12566492: OCULAR ANOMALY DETECTION VIA CONCURRENT PRESENTATION OF STIMULI TO BOTH EYES (2y 5m to grant; granted Mar 03, 2026)
Patent 12474553: Zoom Lens, Camera Module, and Mobile Terminal (2y 5m to grant; granted Nov 18, 2025)
Patent 12429664: CAMERA MODULE (2y 5m to grant; granted Sep 30, 2025)

Study what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 55%
With Interview: 99% (+50.0%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate

Based on 20 resolved cases by this examiner. Grant probability is derived from the career allow rate.
