Prosecution Insights
Last updated: April 19, 2026
Application No. 18/185,454

VIRTUAL AND AUGMENTED REALITY FOR TELEHEALTH

Final Rejection: §101, §103
Filed
Mar 17, 2023
Examiner
ERICKSON, BENNETT S
Art Unit
3683
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Welch Allyn Inc.
OA Round
4 (Final)
Grant Probability: 38% (At Risk)
OA Rounds: 5-6
To Grant: 3y 7m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 38% (53 granted / 141 resolved; -14.4% vs TC avg)
Interview Lift: +45.9% (resolved cases with interview)
Avg Prosecution: 3y 7m (typical timeline)
Total Applications: 188 (47 currently pending, across all art units)
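A quick arithmetic check of the headline figures above. This is a sketch; the round-to-nearest display convention is an assumption about how the dashboard formats the tile.

```python
# Reproduce the "Career Allow Rate" tile from its raw counts:
# 53 granted out of 141 resolved cases, shown as a whole percentage.
granted, resolved = 53, 141
allow_rate_pct = granted / resolved * 100
print(f"{allow_rate_pct:.1f}% -> displayed as {round(allow_rate_pct)}%")
# prints: 37.6% -> displayed as 38%
```

The exact rate is 37.6%, which rounds to the 38% shown in the tile.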

Statute-Specific Performance

§101: 32.4% (-7.6% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 141 resolved cases.
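The per-statute deltas above are internally consistent: subtracting each delta from the examiner's rate recovers a single implied Tech Center baseline. A minimal sketch, assuming the dashboard defines delta as examiner rate minus TC average (that definition is an assumption, not stated in the report):

```python
# Rates and deltas quoted from the Statute-Specific Performance section.
examiner_rates = {"101": 32.4, "103": 45.6, "102": 9.5, "112": 10.6}
deltas_vs_tc = {"101": -7.6, "103": +5.6, "102": -30.5, "112": -29.4}

# implied TC average = examiner rate - delta (assumed definition)
implied_tc_avg = {
    statute: round(rate - deltas_vs_tc[statute], 1)
    for statute, rate in examiner_rates.items()
}
print(implied_tc_avg)
# every statute implies the same 40.0% TC baseline
```

All four statutes back out to a 40.0% Tech Center average, which suggests the deltas were computed against one shared baseline rather than per-statute averages.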

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

In the amendment filed on October 27, 2025, the following has occurred: claim(s) 1, 8, 13 have been amended. Now, claim(s) 1-6, 8-9, 11-13, 15-18, 20 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim(s) 1-6, 8-9, 11-13, 15-18, 20 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claims 1-6: Step 2A Prong One

Claim 1 recites: receive a video feed of the patient environment; recognize a local device in the video feed of the patient environment; detect an input directed to the local device; display the video feed of the patient environment; generate a control command based on the input; and send the control command to the local device, wherein the control command causes the local device to perform an action during the telehealth consultation. These limitations, as drafted, given the broadest reasonable interpretation, but for the recitation of generic computer components, encompass managing personal behavior or relationships between people (including social activities, teaching, and following rules or instructions), which is a subgrouping of Certain Methods of Organizing Human Activity.
That is, other than reciting “an image forming device including: a display device configured to display a patient environment and a lenticular lens having an array of lenses positioned in front of the display device to generate a three dimensional rendering based on images displayed by the display device, wherein the lenticular lens enables viewing of different aspects of the patient environment from multiple viewing angles by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device;”, “at least one processing device communicatively coupled to the image forming device;”, “a memory device storing instructions, when executed by the at least one processing device”, “a local device”, and “wherein the video feed is displayed in three dimensions by using the lenticular lens of the image forming device, wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception” to perform these functions, nothing in the claim precludes the limitations from practically being performed by a person following rules or instructions to conduct a telehealth consultation. For example, the claim encompasses a user manually receiving a video feed of the patient environment, a user manually recognizing a local device in the video feed, a user manually displaying the video feed of the patient environment, a user manually detecting an input that is directed to the local device, a user manually determining a control command based on the detected input, and a user manually sending a control command to the local device.
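For orientation, the claim 1 limitations quoted above describe a sense-and-command loop. The sketch below is purely illustrative: none of these class or function names appear in the application, and the recognition and detection steps are stubbed out as caller-supplied functions.

```python
# Hypothetical sketch of the claimed control-command flow (claim 1).
# All names are invented for illustration; recognition/detection are
# supplied by the caller rather than implemented here.
from dataclasses import dataclass

@dataclass
class ControlCommand:
    device_id: str
    action: str

def handle_frame(frame, recognize_device, detect_input, send):
    """One pass of the claimed loop over a displayed video frame."""
    device = recognize_device(frame)           # "recognize a local device"
    if device is None:
        return None
    user_input = detect_input(frame, device)   # "detect an input directed to the local device"
    if user_input is None:
        return None
    cmd = ControlCommand(device, user_input)   # "generate a control command based on the input"
    send(cmd)                                  # "send the control command to the local device"
    return cmd
```

Under the examiner's reading, each of these steps could also be performed manually by a person following instructions, which is the basis of the Prong One analysis above.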
Claims 2-6 incorporate the abstract idea identified above and recite additional limitations that expand on the abstract idea, but for the recitation of generic computer components. For example, claims 2-3 further describe recognizing the input. Similarly, claims 4-5 further describe the control command. Finally, claim 6 describes making changes to the video feed. Such steps encompass Certain Methods of Organizing Human Activity.

Claims 1-6: Step 2A Prong Two

This judicial exception is not integrated into a practical application because the remaining elements amount to no more than general purpose computer components programmed to perform the abstract idea and generally linking the abstract idea to a particular technical environment. Claims 1-6, directly or indirectly, recite the following generic computer components, “at least one processing device communicatively coupled to the image forming device;”, “a memory device storing instructions, when executed by the at least one processing device”, “a local device” (i.e., "The processing device 304 is an example of a processing unit such as a central processing unit (CPU). The processing device 304 can include one or more CPUs." (See Specification in Paragraph [0029]), "The memory device 306 includes computer-readable media, which may include any media that can be accessed by the patient telehealth device 300. The computer-readable media can include computer readable storage media and computer readable communication media." (See Specification in Paragraph [0030]), ("The image forming device 308 includes a display device 310 that operates to display an image or a video stream." (See Specification in Paragraph [0034])), “FIG. 4 schematically illustrates an example of the image forming device 308 showing a range of viewing angles for the patient P to view different aspects of an image displayed by the image forming device 308.
The image forming device 308 can provide depth perception and allow the patient P to perceive an image from different perspectives or viewing angles. As the patient P adjusts their viewing angle, the image forming device 308 allows the patient P to see different pixels displayed by the display device 310 behind the lenticular lens 312.” (See Specification in Paragraph [0036])) As set forth in the 2019 Eligibility Guidance, 84 Fed. Reg. at 55, "merely includ[ing] instructions to implement an abstract idea on a computer" is an example of when an abstract idea has not been integrated into a practical application. Additionally, the claims recite "a gesture recognition algorithm", "a speech recognition algorithm", and “an image forming device including: a display device configured to display a patient environment; and a lenticular lens having an array of lenses positioned in front of the display device to generate a three dimensional rendering based on images displayed by the display device, wherein the lenticular lens enables viewing of different aspects of the patient environment from multiple viewing angles by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device;”, “wherein the video feed is displayed in three dimensions by using the lenticular lens of the image forming device, wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception” at a high degree of generality, which amounts to no more than generally linking the abstract idea to a particular technical environment. The recitations are also similar to adding the words "apply it" to the abstract idea.
As set forth in MPEP 2106.05(f), merely reciting the words "apply it" or an equivalent is an example of when an abstract idea has not been integrated into a practical application.

Claims 1-6: Step 2B

The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a computer configured to perform the above identified functions amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. See Alice, 573 U.S. at 223 ("mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention."). Additionally, generally linking the abstract idea to a particular technological environment does not amount to significantly more than the abstract idea (see MPEP 2106.05(h) and Affinity Labs of Texas, LLC v. DIRECTV, LLC, 838 F.3d 1253, 120 USPQ2d 1201 (Fed. Cir. 2016)).

Claims 8-9, 11-12 recite the same functions as claims 1-6, but in method form. Therefore, these claims also recite abstract ideas that fall into the Certain Methods of Organizing Human Activity grouping of abstract ideas as explained above. These claims also do not integrate the abstract idea into a practical application for the same reasons as explained above because they merely include instructions to implement the abstract idea on a computer. Therefore, whether considered alone or in combination, the additional elements do not amount to significantly more than the abstract idea.
Claims 13, 15-18, 20: Step 2A Prong One

Claim 13 recites: receive a video feed of the patient; display a three-dimensional image based on the video feed; determine a measurement of a feature in the three-dimensional image, wherein the feature is a wound and the measurement is a size of the wound; and display the feature and the measurement of the feature in the three-dimensional image. These limitations, as drafted, given the broadest reasonable interpretation, but for the recitation of generic computer components, encompass managing personal behavior or relationships between people (including social activities, teaching, and following rules or instructions), which is a subgrouping of Certain Methods of Organizing Human Activity. That is, other than reciting “an image forming device, the image forming device including: a display device configured to display a patient; and a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device, wherein the lenticular lens enables viewing of different aspects of the patient from multiple viewing angles by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device”, “at least one processing device communicatively coupled to the image forming device”, “a memory device storing instructions which, when executed by the at least one processing device”, and “wherein the video feed is displayed in three dimensions by using the lenticular lens of the image forming device, wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception” to perform these functions, nothing in the claim precludes the
limitations from practically being performed by a person following rules or instructions to conduct a telehealth consultation. For example, the claim encompasses a user manually receiving a video feed of the patient environment, a user manually showing an image, a user manually determining a measurement of a feature in the three-dimensional image, and a user manually showing the feature and the measurement of the feature in the three-dimensional image.

Claims 15-18, 20 incorporate the abstract idea identified above and recite additional limitations that expand on the abstract idea, but for the recitation of generic computer components. For example, claims 15-16 further describe generic computer components. Similarly, claim 17 further describes the video feed. Finally, claim 20 further describes the image and changing the image. Such steps encompass Certain Methods of Organizing Human Activity.

Claims 13, 15-18, 20: Step 2A Prong Two

This judicial exception is not integrated into a practical application because the remaining elements amount to no more than general purpose computer components programmed to perform the abstract idea and generally linking the abstract idea to a particular technical environment. Claims 13, 15-18, 20, directly or indirectly, recite the following generic computer components, “at least one processing device communicatively coupled to the image forming device”, “a memory device storing instructions which, when executed by the at least one processing device” (i.e., "The processing device 304 is an example of a processing unit such as a central processing unit (CPU). The processing device 304 can include one or more CPUs." (See Specification in Paragraph [0029]), "The memory device 306 includes computer-readable media, which may include any media that can be accessed by the patient telehealth device 300. The computer-readable media can include computer readable storage media and computer readable communication media."
(See Specification in Paragraph [0030]), “FIG. 4 schematically illustrates an example of the image forming device 308 showing a range of viewing angles for the patient P to view different aspects of an image displayed by the image forming device 308. The image forming device 308 can provide depth perception and allow the patient P to perceive an image from different perspectives or viewing angles. As the patient P adjusts their viewing angle, the image forming device 308 allows the patient P to see different pixels displayed by the display device 310 behind the lenticular lens 312.” (See Specification in Paragraph [0036]), and “an image capture device” in claim 15 ("Referring back to FIG. 3, the patient telehealth device 300 further includes an image capture device 314 that operates to capture an image or video feed of the patient P while the patient is using the patient telehealth device 300." (See Specification in Paragraph [0039])). As set forth in the 2019 Eligibility Guidance, 84 Fed. Reg. at 55 "merely include[ing] instructions to implement an abstract idea on a computer" is an example of when an abstract idea has not been integrated into a practical application. 
Additionally, the claims recite “an image forming device, the image forming device including: a display device configured to display a patient; and a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device, wherein the lenticular lens enables viewing of different aspects of the patient from multiple viewing angles by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device” and “wherein the video feed is displayed in three dimensions by using the lenticular lens of the image forming device, wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception” at a high degree of generality, which amounts to no more than generally linking the abstract idea to a particular technical environment. The recitations are also similar to adding the words "apply it" to the abstract idea. As set forth in MPEP 2106.05(f), merely reciting the words "apply it" or an equivalent is an example of when an abstract idea has not been integrated into a practical application.

Claims 13, 15-18, 20: Step 2B

The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a computer configured to perform the above identified functions amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. See Alice, 573 U.S.
at 223 ("mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention."). Additionally, generally linking the abstract idea to a particular technological environment does not amount to significantly more than the abstract idea (see MPEP 2106.05(h) and Affinity Labs of Texas, LLC v. DIRECTV, LLC, 838 F.3d 1253, 120 USPQ2d 1201 (Fed. Cir. 2016)). Therefore, whether considered alone or in combination, the additional elements do not amount to significantly more than the abstract idea.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4-6, 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Davidson (U.S. Patent Pre-Grant Publication No. 2022/0208367) in view of Krueger (U.S. Patent Pre-Grant Publication No. 2018/0008141) in further view of Freeman et al. (U.S. Patent Pre-Grant Publication No. 2019/0385342).

As per independent claim 1, Davidson discloses a device for conducting a telehealth consultation, the device comprising: at least one processing device communicatively coupled to the image forming device (See Paragraphs [0032], [0040]: The controller of the information system may be included at least partially in a local server of the medical facility, a remote server, or a combination thereof, the controller includes a processor, a memory, and other control circuitry, which the Examiner is interpreting the controller to encompass at least one processing device, and the wearable device is able to communicate via the communication network (Paragraph [0040]), which the Examiner is interpreting to encompass communicatively coupled to the image forming device); and a memory device storing instructions which, when executed by the at least one processing device, cause the at least one processing device to: receive a video feed of a patient environment (See Fig.
2 and Paragraphs [0038]-[0039]: The data captured by the imagers 92, 98 generally includes image data, such as at least one of a picture, video, real-time streaming of data, other transmissions of image data, or combinations thereof, which the Examiner is interpreting the image data to encompass a video feed); recognize a local device in the video feed of the patient environment (See Paragraphs [0106]-[0109]: A visual identifier is operably coupled to the treatment device and a controller is configured to communicate with a remote device having a sensor for sensing the visual identifier within a field of detection, the controller is configured to recognize the visual identifier sensed by the remote device, determine device information associated with the visual identifier based on a configuration of the visual identifier, retrieve the device information relating to the treatment device associated with the visual identifier from an information source, and generate a virtual image including the device information configured to be viewed via the remote device, which the Examiner is interpreting a visual identifier is operably coupled to the treatment device to encompass a local device); display the video feed of the patient environment (See Paragraphs [0028]-[0029]: The virtual signage (e.g., the virtual image) is displayed using at least one of augmented reality and mixed reality), wherein the video feed is displayed in three dimensions by using the lenticular lens of the image forming device (See Fig. 
8, 10 and Paragraphs [0029]-[0030], [0038]: The virtual signage (e.g., the virtual image) is displayed using at least one of augmented reality and mixed reality, which the Examiner is interpreting augmented reality or mixed reality to encompass displaying the video feed in three dimensions, when combined with Krueger as described below), wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception; detect an input directed to the local device (See Paragraphs [0042]-[0044], [0077]-[0079]: The user sensors may include at least one gesture sensor to track position, movement, and gestures of the caregiver, and the movement and/or the focus of the caregiver may be analyzed by the control unit and/or the controller and compared to the virtual image to determine the selection or the input from the caregiver, which the Examiner is interpreting determine the section or the input from the caregiver to encompass detect an input directed to the local device); generate a control command based on the input (See Paragraphs [0077]-[0079]: The selection or interaction with the virtual image may be sensed by at least one of the environmental sensors and the user sensors and compared to the virtual image, the wearable device may then communicate the command or selection of the user to the medical bed); and send the control command to the local device, wherein the control command causes the local device to perform an action during the telehealth consultation (See Fig. 11 and Paragraphs [0077]-[0079]: The selectable features may activate certain protocols or adjust certain aspects of the medical bed, as illustrated in FIG. 
11, the caregiver may select whether to raise the side rails of the medical bed, which the Examiner is interpreting the caregiver may select whether to raise the side rails of the medical bed to encompass perform an action.)

While Davidson teaches the device to display the video feed of the patient environment, wherein the video feed is displayed in three dimensions as described above, Davidson may not explicitly teach an image forming device including: a display device configured to display a patient environment; and a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device, wherein the lenticular lens enables viewing of different aspects of the patient environment from multiple viewing angles by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device; display the video feed of the patient environment, wherein the video feed is displayed in three dimensions by using the lenticular lens of the image forming device, wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception.
Krueger teaches a device for an image forming device including: a display device configured to display a patient environment (See Paragraphs [0074]-[0079]: Realistic images can be used to simulate the target of interest or visual element being viewed or the environment in which the person would normally be engaged when performing his or her activities of choice or occupation, which the Examiner is interpreting realistic images to encompass a display, VR, AR, or synthetic 3D devices to encompass a display device, and performing his or her activities of choice or occupation to encompass a patient environment); and a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device (See Paragraphs [0008]-[0012]: A lenticular display that presents the images for the left eye and right eye in a unit that is not worn by the user, and the system of Krueger can select a lenticular display to be used (Claim 18)), wherein the lenticular lens enables viewing of different aspects of the patient environment from multiple viewing angles (See Paragraphs [0008]-[0012]: A lenticular display that presents the images for the left eye and right eye in a unit that is not worn by the user, instead the image for the left eye and the right eye are produced by a single device in a way that causes the left image to be projected at an angle visible to the left eye and causes the right image to be projected at an angle visible to the right eye, which the Examiner is interpreting to encompass the lenticular lens enables viewing of different aspects of the patient environment from multiple viewing angles as the viewing would change with the user’s change of eye position) by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device; display the video feed of the patient environment,
wherein the video feed is displayed in three dimensions by using the lenticular lens of the image forming device (See Paragraphs [0008]-[0012]: A lenticular display that presents the images for the left eye and right eye in a unit that is not worn by the user, instead the image for the left eye and the right eye are produced by a single device in a way that causes the left image to be projected at an angle visible to the left eye and causes the right image to be projected at an angle visible to the right eye, which the Examiner is interpreting to encompass the different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device), wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception (See Paragraphs [0008], [0079], [0246]-[0256]: A multiple depth plane three dimensional (3D) display system can visually provide multiple virtual depth planes at respective radial focal distances to simulate a 4D light beam field, which the Examiner is interpreting VRD system also can show an image in each eye with an enough angle difference to simulate three-dimensional scenes with high fidelity to encompass different aspects of the video feed of the patient environment are viewable through the lenticular lenses, interpreting the multiple depth plane three dimensional (3D) display system to encompass changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception ([0252]).) 
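The lenticular-lens behavior at issue (different viewing angles revealing different display pixels behind the lens) can be illustrated with a toy view-mapping function. This is a hypothetical simplification for orientation only, not anything taught by Krueger, Freeman, or the application; real lenticular displays calibrate for lens pitch, lens slant, and viewing distance rather than using a linear angle map.

```python
# Toy model of a lenticular autostereoscopic display: each lenslet sits
# over n_views pixel columns, and which column is visible depends on the
# viewing angle. The linear angle-to-view mapping is an assumption.
def visible_pixel_column(lenslet_index: int, viewing_angle_deg: float,
                         n_views: int = 4, max_angle_deg: float = 30.0) -> int:
    """Index of the display pixel column behind this lenslet that is
    visible from the given viewing angle."""
    # Clamp the angle to the lens's designed field of view.
    a = max(-max_angle_deg, min(max_angle_deg, viewing_angle_deg))
    # Map [-max_angle, +max_angle] linearly onto view indices 0..n_views-1.
    view = int((a + max_angle_deg) / (2 * max_angle_deg) * (n_views - 1) + 0.5)
    return lenslet_index * n_views + view

# Moving from one extreme viewing angle to the other reveals a different
# pixel column behind the same lenslet, which is the depth-perception
# mechanism the claim language describes.
print(visible_pixel_column(0, -30.0), visible_pixel_column(0, +30.0))
```

Presenting distinct columns to the left and right eye (two different angles) is what yields the stereo depth effect described in Krueger's paragraphs [0008]-[0012].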
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Davidson to include an image forming device including: a display device configured to display a patient environment; and a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device, wherein the lenticular lens enables viewing of different aspects of the patient environment from multiple viewing angles; display the video feed of the patient environment, wherein the video feed is displayed in three dimensions by using the lenticular lens of the image forming device, wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception, as taught by Krueger. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Davidson with Krueger with the motivation of providing unique complexity to the visual elements and to the background scenes (See Background of Krueger in Paragraph [0041]).

While Davidson/Krueger discloses a device for a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device, wherein the lenticular lens enables viewing of different aspects of the patient environment from multiple viewing angles, Davidson/Krueger may not explicitly teach directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device.
Freeman teaches a device with a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device, wherein the lenticular lens enables viewing of different aspects of the patient environment from multiple viewing angles by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device (See Paragraphs [0112], [0233]: One or more sensors may be utilized to find the location and distance of the user's eyes relative to the display unit such that the image may be displayed properly and to track the user's eye movement relative to the display using eye-tracking, and the pixels relating to the data portion of the image which is moved may be reduced to smaller pixels, such that the moved pixels and the pre-existing pixels occupy the same space on the display, which the Examiner is interpreting moved pixels to encompass directing different pixels of the display device to different viewing positions, and interpreting the tracking the user’s eye movement to encompass the multiple viewing angles change relative to the display device.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Davidson/Krueger to include directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device as taught by Freeman. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Davidson/Krueger with Freeman with the motivation of providing improvements in augmented reality (AR) glasses (See Background of the Invention of Freeman in Paragraph [0002]).

As per claim 2, Davidson/Krueger/Freeman discloses the device of claim 1 as described above.
Davidson further teaches wherein the memory device stores further instructions which, when executed by the at least one processing device (See Paragraph [0048]: Each treatment device includes a control unit that has a processor, a memory, and other control circuitry, instructions or routines are stored within the memory and executable by the processor), cause the at least one processing device to: perform a gesture recognition algorithm to recognize the input as a gesture (See Paragraphs [0044], [0060]: The user sensors may include at least one gesture sensor to track position, movement, and gestures of the caregiver, which may determine the interaction of the caregiver with the virtual image, which the Examiner is interpreting the gesture sensor to encompass a gesture recognition algorithm.) Claim(s) 11 mirrors claim 2 only within (a) different statutory category/categories, and is/are rejected for the same reason as claim 2. As per claim 4, Davidson/Krueger/Freeman discloses the device of claim 1 as described above. Davidson further teaches wherein the control command causes the local device to change a data display (See Paragraphs [0078]-[0080]: The virtual image may be adjusted or changed in response to the movement of the caregiver, which the Examiner is interpreting adjusting or changing the virtual image to encompass change a data display.) As per claim 5, Davidson/Krueger/Freeman discloses the device of claim 1 as described above. Davidson further teaches wherein the control command causes the local device to provide a therapy to the patient (See Paragraphs [0027], [0048]-[0052], [0068]-[0069]: Each room environment includes at least one treatment device for treating or otherwise caring for a patient, which the Examiner is interpreting treatment device for treating or otherwise caring for a patient to encompass the local device to provide a therapy to the patient.) As per claim 6, Davidson/Krueger/Freeman discloses the device of claim 1 as described above. 
Davidson further teaches wherein the memory device stores further instructions which, when executed by the at least one processing device, cause the at least one processing device to: augment the video feed to include a display of clinical data measured by the local device (See Paragraphs [0028]-[0029]: The information system for the medical facility disclosed herein utilizes the remote device to display virtual signage that may include the device information, the patient information, any additional information useful for the caregiver, or a combination thereof, which the Examiner is interpreting the patient information, any additional information useful for the caregiver to encompass clinical data measured by the local device.) Claim(s) 9 mirrors claim 6 only within (a) different statutory category/categories, and is/are rejected for the same reason as claim 6. As per independent claim 8, Davidson discloses a method of conducting a telehealth consultation, the method comprising: receiving a video feed of a patient environment (See Fig. 
2 and Paragraphs [0038]-[0039]: The data captured by the imagers 92, 98 generally includes image data, such as at least one of a picture, video, real-time streaming of data, other transmissions of image data, or combinations thereof, which the Examiner is interpreting the image data to encompass a video feed); recognizing a local device in the video feed of the patient environment (See Paragraphs [0106]-[0109]: A visual identifier is operably coupled to the treatment device and a controller is configured to communicate with a remote device having a sensor for sensing the visual identifier within a field of detection, the controller is configured to recognize the visual identifier sensed by the remote device, determine device information associated with the visual identifier based on a configuration of the visual identifier, retrieve the device information relating to the treatment device associated with the visual identifier from an information source, and generate a virtual image including the device information configured to be viewed via the remote device, which the Examiner is interpreting a visual identifier is operably coupled to the treatment device to encompass a local device); detecting an input directed to the local device, the input detected from a remote caregiver watching the video feed (See Paragraphs [0042]-[0044], [0077]-[0079]: The user sensors may include at least one gesture sensor to track position, movement, and gestures of the caregiver, and the movement and/or the focus of the caregiver may be analyzed by the control unit and/or the controller and compared to the virtual image to determine the selection or the input from the caregiver, which the Examiner is interpreting determine the selection or the input from the caregiver to encompass detect an input directed to the local device); generating a control command based on the input (See Paragraphs [0077]-[0079]: The selection or interaction with the virtual image may be sensed by at least one of
the environmental sensors and the user sensors and compared to the virtual image, the wearable device may then communicate the command or selection of the user to the medical bed); sending the control command to the local device, wherein the control command causes the local device to perform an action during the telehealth consultation (See Fig. 11 and Paragraphs [0077]-[0079]: The selectable features may activate certain protocols or adjust certain aspects of the medical bed, as illustrated in FIG. 11, the caregiver may select whether to raise the side rails of the medical bed, which the Examiner is interpreting the caregiver may select whether to raise the side rails of the medical bed to encompass perform an action.) While Davidson teaches the method as described above, Davidson may not explicitly teach displaying the video feed of the patient environment, wherein the video feed is displayed in three dimensions by an image forming device including a display device and a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device, wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception.
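The view-dependent pixel behavior recited in this limitation can be illustrated with a minimal sketch. The model below is hypothetical and is not drawn from any cited reference: it assumes an idealized lenticular array in which each lenslet covers a fixed number of interleaved pixel columns and refracts each column toward its own angular slice, so that changing the viewing angle reveals a different set of pixels behind the lens.

```python
# Idealized lenticular-display model (all parameters hypothetical):
# each cylindrical lenslet sits in front of N_VIEWS adjacent pixel
# columns and directs each column toward a distinct viewing direction.

N_VIEWS = 4            # pixel columns (views) interleaved under each lenslet
FIELD_OF_VIEW = 40.0   # angular range covered by one lenslet, in degrees

def visible_view(viewing_angle_deg: float) -> int:
    """Return the index of the pixel column (view) seen at a given angle.

    Angles are measured from the display normal and clamped to the
    lenslet's field of view; each of the N_VIEWS columns owns an equal
    angular slice, which is why changing the viewing angle reveals
    different pixels behind the lens.
    """
    half = FIELD_OF_VIEW / 2.0
    a = max(-half, min(half, viewing_angle_deg))
    slice_width = FIELD_OF_VIEW / N_VIEWS
    # Map [-half, +half] onto view indices [0, N_VIEWS - 1].
    return min(N_VIEWS - 1, int((a + half) // slice_width))

def interleave(views):
    """Interleave per-view pixel columns into one display row.

    `views` is a list of equal-length pixel lists; the display row
    places one pixel from each view under every lenslet.
    """
    return [views[v][i] for i in range(len(views[0])) for v in range(len(views))]
```

Under this toy model, a viewer sweeping across the display's field of view sees the interleaved views in sequence, which is the depth-perception effect the limitation describes.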
Krueger teaches a method for displaying the video feed of the patient environment, wherein the video feed is displayed in three dimensions by an image forming device including a display device (See Paragraphs [0074]-[0079]: Realistic images can be used to simulate the target of interest or visual element being viewed or the environment in which the person would normally be engaged when performing his or her activities of choice or occupation, which the Examiner is interpreting realistic images to encompass a display, VR, AR, or synthetic 3D devices to encompass a display device, and performing his or her activities of choice or occupation to encompass a patient environment) and a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device (See Paragraphs [0008]-[0012]: A lenticular display that presents the images for the left eye and right eye in a unit that is not worn by the user, and the system of Krueger can select a lenticular display to be used (Claim 18)) by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device, wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device (See Paragraphs [0008]-[0012]: A lenticular display that presents the images for the left eye and right eye in a unit that is not worn by the user, instead the image for the left eye and the right eye are produced by a single device in a way that causes the left image to be projected at an angle visible to the left eye and causes the right image to be projected at an angle visible to the right eye, which the Examiner is interpreting to encompass the different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing
angle changes relative to the display device) such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception (See Paragraphs [0008], [0079], [0246]-[0256]: A multiple depth plane three dimensional (3D) display system can visually provide multiple virtual depth planes at respective radial focal distances to simulate a 4D light beam field, which the Examiner is interpreting the VRD system, which can show an image in each eye with enough angle difference to simulate three-dimensional scenes with high fidelity, to encompass different aspects of the video feed of the patient environment being viewable through the lenticular lenses, and interpreting the multiple depth plane three dimensional (3D) display system to encompass changing the viewing angle to reveal different pixels of the display device behind the lenticular lens to provide depth perception ([0252]).) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Davidson to include displaying the video feed of the patient environment, wherein the video feed is displayed in three dimensions by an image forming device including a display device and a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device, wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception, as taught by Krueger.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Davidson with Krueger with the motivation of providing unique complexity to the visual elements and to the background scenes (See Background of Krueger in Paragraph [0041]). While Davidson/Krueger discloses a method for displaying the video feed of the patient environment, wherein the video feed is displayed in three dimensions by an image forming device including a display device and a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device, wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception, Davidson/Krueger may not explicitly teach directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device.
Freeman teaches a method of directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device (See Paragraphs [0112], [0233]: One or more sensors may be utilized to find the location and distance of the user's eyes relative to the display unit such that the image may be displayed properly and to track the user's eye movement relative to the display using eye-tracking, and the pixels relating to the data portion of the image which is moved may be reduced to smaller pixels, such that the moved pixels and the pre-existing pixels occupy the same space on the display, which the Examiner is interpreting moved pixels to encompass directing different pixels of the display device to different viewing positions, and interpreting the tracking of the user's eye movement to encompass the multiple viewing angles changing relative to the display device.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Davidson/Krueger to include directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device as taught by Freeman. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Davidson/Krueger with Freeman with the motivation of providing improvements in augmented reality (AR) glasses (See Background of the Invention of Freeman in Paragraph [0002]). Claims 3, 12-13, 15-18, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Davidson (U.S. Patent Pre-Grant Publication No. 2022/0208367) in view of Krueger (U.S. Patent Pre-Grant Publication No. 2018/0008141) in view of Freeman et al. (U.S. Patent Pre-Grant Publication No. 2019/0385342) in further view of Osterhout et al. (U.S. Patent Pre-Grant Publication No. 2012/0194551).
As per claim 3, Davidson/Krueger/Freeman discloses the device of claim 1 as described above. Davidson further teaches wherein the memory device stores further instructions which, when executed by the at least one processing device, […] (See Paragraph [0048]: Each treatment device includes a control unit that has a processor, a memory, and other control circuitry, instructions or routines are stored within the memory and executable by the processor.) While Davidson/Krueger/Freeman teaches a device wherein the memory device stores further instructions which, when executed by the at least one processing device, Davidson/Krueger/Freeman may not explicitly teach perform a speech recognition algorithm to recognize the input as a voice command. Osterhout teaches a device wherein the memory device stores further instructions which, when executed by the at least one processing device, cause the at least one processing device to: perform a speech recognition algorithm to recognize the input as a voice command (See Paragraphs [0467], [0769]: User action capture inputs and/or devices may include a head tracking system, camera, voice recognition system, body movement sensor (e.g. kinetic sensor), eye-gaze detection system, tongue touch pad, sip-and-puff systems, joystick, cursor, mouse, touch screen, touch sensor, finger tracking devices, 3D/2D mouse, inertial movement tracking, microphone, wearable sensor sets, robotic motion detection system, optical motion tracking system, laser motion tracking system, keyboard, virtual keyboard, virtual keyboard on a physical platform, context determination system, activity determination system (e.g. on a train, on a plane, walking, exercising, etc.), finger following camera, virtualized in-hand display, sign language system, trackball, hand-mounted camera, temple-located sensors, glasses-located sensors, Bluetooth communications, wireless communications, satellite communications, and the like, and combinations of the same.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Davidson/Krueger/Freeman to include perform a speech recognition algorithm to recognize the input as a voice command as taught by Osterhout. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Davidson/Krueger/Freeman with Osterhout with the motivation of improving performance with voice commands (See Detailed Description of Osterhout in Paragraph [0811]). Claim(s) 12 mirrors claim 3 only within (a) different statutory category/categories, and is/are rejected for the same reason as claim 3. As per independent claim 13, Davidson discloses a device for conducting a telehealth consultation, the device comprising: at least one processing device communicatively coupled to the image forming device (See Paragraphs [0032], [0040]: The controller of the information system may be included at least partially in a local server of the medical facility, a remote server, or a combination thereof, the controller includes a processor, a memory, and other control circuitry, which the Examiner is interpreting the controller to encompass at least one processing device, and the wearable device is able to communicate via the communication network (Paragraph [0040]), which the Examiner is interpreting to encompass communicatively coupled to the image forming device); and a memory device storing instructions which, when executed by the at least one processing device, cause the at least one processing device to: receive a video feed of the patient (See Paragraphs [0032], [0040]: The controller of the information system may be included at least partially in a local server of the medical facility, a remote server, or a combination thereof, the controller includes a processor, a memory, and other control circuitry, which the Examiner is interpreting the controller to encompass at least
one processing device, and the wearable device is able to communicate via the communication network (Paragraph [0040]), which the Examiner is interpreting to encompass communicatively coupled to the image forming device when combined with Krueger); and display the feature and the measurement of the feature in the three-dimensional image (See Paragraphs [0028]-[0029]: The information system for the medical facility disclosed herein utilizes the remote device to display virtual signage that may include the device information, the patient information, any additional information useful for the caregiver, or a combination thereof, which the Examiner is interpreting the patient information, any additional information useful for the caregiver to encompass the feature and the measurement of the feature in the three-dimensional image when combined with Osterhout.) While Davidson teaches the device as described above, Davidson may not explicitly teach an image forming device, the image forming device including: a display device configured to display a patient; and a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device, wherein the lenticular lens enables viewing of different aspects of the patient from multiple viewing angles by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device; display a three-dimensional image using the image forming device, the three-dimensional image being based on the video feed, wherein different aspects of the patient are viewable through the lenticular lens as a viewing angle changes relative to the display device such that the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception. 
Krueger teaches a device comprising an image forming device, the image forming device including: a display device configured to display a patient (See Paragraphs [0074]-[0079]: Realistic images can be used to simulate the target of interest or visual element being viewed or the environment in which the person would normally be engaged when performing his or her activities of choice or occupation, which the Examiner is interpreting realistic images to encompass a display, VR, AR, or synthetic 3D devices to encompass a display device, and performing his or her activities of choice or occupation to encompass a patient environment); and a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device (See Paragraphs [0008]-[0012]: A lenticular display that presents the images for the left eye and right eye in a unit that is not worn by the user, and the system of Krueger can select a lenticular display to be used (Claim 18)), wherein the lenticular lens enables viewing of different aspects of the patient from multiple viewing angles (See Paragraphs [0008]-[0012]: A lenticular display that presents the images for the left eye and right eye in a unit that is not worn by the user, instead the image for the left eye and the right eye are produced by a single device in a way that causes the left image to be projected at an angle visible to the left eye and causes the right image to be projected at an angle visible to the right eye, which the Examiner is interpreting to encompass the lenticular lens enables viewing of different aspects of the patient environment from multiple viewing angles as the viewing would change with the user's change of eye position) by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device; display a three-dimensional image using the image
forming device, the three-dimensional image being based on the video feed (See Paragraphs [0008]-[0012]: A lenticular display that presents the images for the left eye and right eye in a unit that is not worn by the user, instead the image for the left eye and the right eye are produced by a single device in a way that causes the left image to be projected at an angle visible to the left eye and causes the right image to be projected at an angle visible to the right eye, which the Examiner is interpreting to encompass the different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device), wherein different aspects of the patient are viewable through the lenticular lens as a viewing angle changes relative to the display device such that the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception (See Paragraphs [0008], [0079], [0246]-[0256]: A multiple depth plane three dimensional (3D) display system can visually provide multiple virtual depth planes at respective radial focal distances to simulate a 4D light beam field, which the Examiner is interpreting VRD system also can show an image in each eye with an enough angle difference to simulate three-dimensional scenes with high fidelity to encompass different aspects of the video feed of the patient environment are viewable through the lenticular lenses, interpreting the multiple depth plane three dimensional (3D) display system to encompass changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception ([0252]).) 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Davidson to include an image forming device, the image forming device including: a display device configured to display a patient; and a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device, wherein the lenticular lens enables viewing of different aspects of the patient from multiple viewing angles; display a three-dimensional image using the image forming device, the three-dimensional image being based on the video feed, wherein different aspects of the patient are viewable through the lenticular lens as a viewing angle changes relative to the display device such that the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception, as taught by Krueger. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Davidson with Krueger with the motivation of providing unique complexity to the visual elements and to the background scenes (See Background of Krueger in Paragraph [0041]). While Davidson/Krueger discloses a device with a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device, wherein the lenticular lens enables viewing of different aspects of the patient from multiple viewing angles, Davidson/Krueger may not explicitly teach directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device.
Freeman teaches a device with a lenticular lens having an array of lenses positioned in front of the display device to generate a three-dimensional rendering based on images displayed by the display device, wherein the lenticular lens enables viewing of different aspects of the patient from multiple viewing angles by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device (See Paragraphs [0112], [0233]: One or more sensors may be utilized to find the location and distance of the user's eyes relative to the display unit such that the image may be displayed properly and to track the user's eye movement relative to the display using eye-tracking, and the pixels relating to the data portion of the image which is moved may be reduced to smaller pixels, such that the moved pixels and the pre-existing pixels occupy the same space on the display, which the Examiner is interpreting moved pixels to encompass directing different pixels of the display device to different viewing positions, and interpreting the tracking of the user's eye movement to encompass the multiple viewing angles changing relative to the display device.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Davidson/Krueger to include directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device as taught by Freeman. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Davidson/Krueger with Freeman with the motivation of providing improvements in augmented reality (AR) glasses (See Background of the Invention of Freeman in Paragraph [0002]).
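The eye-tracking-driven pixel direction attributed to Freeman above can be sketched in simplified form. The geometry and parameter values below are hypothetical, assuming only that a sensor reports the viewer's eye position relative to the display and that the renderer shifts which pixels are driven so the steered view follows the eye:

```python
import math

# Hypothetical sketch of eye-tracking-driven pixel steering: a sensor
# reports the eye position relative to the display center, the viewing
# angle is derived from it, and the renderer applies a whole-pixel
# offset so the correct view reaches that position.

def viewing_angle_deg(eye_x_mm: float, eye_z_mm: float) -> float:
    """Horizontal viewing angle of the eye, measured from the display normal."""
    return math.degrees(math.atan2(eye_x_mm, eye_z_mm))

def pixel_shift(eye_x_mm: float, eye_z_mm: float,
                degrees_per_pixel: float = 2.0) -> int:
    """Whole-pixel offset to apply so the steered view tracks the eye.

    `degrees_per_pixel` is a made-up optical constant relating one
    pixel of displacement under the lens to an angular change.
    """
    return round(viewing_angle_deg(eye_x_mm, eye_z_mm) / degrees_per_pixel)
```

An eye centered on the display (x = 0) produces no shift; moving the eye sideways yields progressively larger offsets, which corresponds to "directing different pixels of the display device to different viewing positions as the multiple viewing angles change."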
While Davidson/Krueger/Freeman teaches the device as described above, Davidson/Krueger/Freeman may not explicitly teach determine a measurement of a feature in the three-dimensional image, wherein the feature is a wound and the measurement is a size of the wound. Osterhout teaches a device configured to determine a measurement of a feature in the three-dimensional image (See Paragraphs [0444]-[0446]: The eyepiece may be used periodically to measure the gait of the user, and maintain the measurements in a database for analysis, which the Examiner is interpreting the measured gait to encompass a measurement of a feature in the three-dimensional image), wherein the feature is a wound and the measurement is a size of the wound (See Paragraphs [0838]-[0843]: A gunshot wound to the chest of a soldier is the example used in Osterhout to supply guidance to a caregiver.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Davidson/Krueger/Freeman to include determine a measurement of a feature in the three-dimensional image, wherein the feature is a wound and the measurement is a size of the wound as taught by Osterhout. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Davidson/Krueger/Freeman with Osterhout with the motivation of improving use and associated efficiency of the eyepiece (See Detailed Description of Osterhout in Paragraph [0784]). As per claim 15, Davidson/Krueger/Freeman/Osterhout discloses the device of claim 13 as described above.
Davidson further teaches further comprising: an image capture device configured to capture an image of a user (See Paragraphs [0037]-[0039], [0056]: The imager captures data from the field of detection, while the imager captures image data within a separate field of detection, and the imager may be configured to capture image data of the face of the caregiver for identification and authorization purposes.) As per claim 16, Davidson/Krueger/Freeman/Osterhout discloses the device of claims 13 and 15 as described above. Davidson/Krueger/Freeman may not explicitly teach further comprising: a mirror covering the image capture device, the mirror allowing the image capture device to capture images without disturbing the three-dimensional image. Osterhout teaches a device further comprising: a mirror covering the image capture device, the mirror allowing the image capture device to capture images without disturbing the three-dimensional image (See Fig. 72, 88 and Paragraphs [0230]-[0234]: Certain optical elements of the transfer optics may replace the outer lens of an eyewear application, in an example, a beam splitter, lens, or mirror of the transfer optics could replace the front lens for an eyewear application (e.g. sunglasses), thus eliminating the need for the front lens of the glasses, such as if the curved reflection mirror is extended to cover the glasses, eliminating the need for the cover lens, which the Examiner is interpreting the curved reflection mirror to encompass a mirror covering the image capture device.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Davidson/Krueger/Freeman to include a mirror covering the image capture device, the mirror allowing the image capture device to capture images without disturbing the three-dimensional image as taught by Osterhout.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Davidson/Krueger/Freeman with Osterhout with the motivation of improving use and associated efficiency of the eyepiece (See Detailed Description of Osterhout in Paragraph [0784]). As per claim 17, Davidson/Krueger/Freeman/Osterhout discloses the device of claim 13 as described above. Davidson further teaches wherein the video feed includes red, green, blue, and depth (RGB-D) data, and the three-dimensional image is generated using the RGB-D data (See Paragraphs [0041]-[0042]: The environmental sensors may include infrared cameras or Light Detection and Ranging (LIDAR) emitters and detectors to capture depth or range in the surrounding environment and multiple sensors, such as an infrared sensor or Red, Green, Blue (RGB) cameras that sense information about the movement of the user, such as the position, orientation, and motion of the user within the environment.) As per claim 18, Davidson/Krueger/Freeman/Osterhout discloses the device of claims 13 and 17 as described above. Davidson further teaches wherein the measurement is determined based on the RGB-D data (See Paragraphs [0041]-[0042]: The environmental sensors may include infrared cameras or Light Detection and Ranging (LIDAR) emitters and detectors to capture depth or range in the surrounding environment and multiple sensors, such as an infrared sensor or Red, Green, Blue (RGB) cameras that sense information about the movement of the user, such as the position, orientation, and motion of the user within the environment.) As per claim 20, Davidson/Krueger/Freeman/Osterhout discloses the device of claim 13 as described above.
Davidson further teaches wherein the memory device stores further instructions which, when executed by the at least one processing device (See Paragraph [0048]: Each treatment device includes a control unit that has a processor, a memory, and other control circuitry; instructions or routines are stored within the memory and executable by the processor), cause the at least one processing device to: recognize an object in the three-dimensional image (See Paragraphs [0106]-[0109]: A visual identifier is operably coupled to the treatment device and a controller is configured to communicate with a remote device having a sensor for sensing the visual identifier within a field of detection; the controller is configured to recognize the visual identifier sensed by the remote device, determine device information associated with the visual identifier based on a configuration of the visual identifier, retrieve the device information relating to the treatment device associated with the visual identifier from an information source, and generate a virtual image including the device information configured to be viewed via the remote device, which the Examiner is interpreting a visual identifier operably coupled to the treatment device to encompass recognizing an object in the three-dimensional image); augment the three-dimensional image to include a display of clinical data associated with the object recognized in the three-dimensional image (See Paragraphs [0028]-[0029]: The information system for the medical facility disclosed herein utilizes the remote device to display virtual signage that may include the device information, the patient information, any additional information useful for the caregiver, or a combination thereof, which the Examiner is interpreting the patient information and any additional information useful for the caregiver to encompass clinical data associated with the object.)
Response to Arguments

In the Remarks filed on October 27, 2025, the Applicant argues that the newly amended and/or added claims overcome the Claim Objection(s), 35 U.S.C. 101 rejection(s), and 35 U.S.C. 103 rejection(s). The Examiner acknowledges that the newly added and/or amended claims overcome the Claim Objection(s). However, the Examiner does not acknowledge that the newly added and/or amended claims overcome the 35 U.S.C. 101 rejection(s) and 35 U.S.C. 103 rejection(s).

The Applicant argues that: (1) claim 1, which is representative, is patent eligible under Step 2A, Prong Two because the claim recites elements that integrate the alleged judicial exception into a practical application. The claim now recites specific structural limitations and technical functionality that meaningfully limit the scope of any abstract concept and provide a concrete technological solution to a real-world problem in telehealth consultations. The assertion in the action that the lenticular lens is recited at a high degree of generality and amounts to generally linking the abstract idea to a particular technical environment is no longer applicable. Action at 7. The amended claim now provides specific technical details about how the lenticular lens operates by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device and how changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception. This level of technical specificity goes well beyond a generic recitation and describes a particular technical implementation that meaningfully constrains the claim scope.
The amendments to claim 1 provide an improvement to technology by solving a specific technological problem in the field of telehealth displays, including displaying a video feed of a patient environment in three dimensions without requiring specialized eyewear accessories such as goggles or eyeglasses. Specification at paragraph 72. The specific technical solution involves directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception. This represents a technological advancement over traditional display systems that require cumbersome and expensive hardware for three-dimensional viewing. The amendments specify concrete structural elements and their interactions that go beyond generic computer components and cannot be performed by human mental activity. The action states that nothing in the claim precludes the limitations from practically being performed by a person following rules or instructions to conduct a telehealth consultation. Action at 5. However, the specific limitation requiring directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception cannot practically be performed in the human mind or by a human using pen and paper. This optical pixel-directing functionality through a lenticular lens array requires specific hardware components and cannot be accomplished through human mental processes alone; (2) the amendments to claim 1 also provide an inventive concept that amounts to significantly more than the alleged judicial exception under Step 2B. 
Claim 1 now recites a non-conventional and non-generic arrangement of elements that provides novel functionality through the specific technical implementation of directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception. This arrangement is not well-understood, routine, or conventional in medical information display systems, as the specific pixel-directing functionality through a lenticular lens array represents a technical innovation that goes beyond generic computer implementation. Action at 7-8. The combination of elements in amended claim 1 provides a technical solution for displaying three-dimensional medical information without specialized eyewear, which constitutes an inventive concept that is significantly more than the alleged abstract idea. Specification at paragraph 72. The ordered combination of the lenticular lens array with the specific pixel-directing mechanism creates a technical effect that enables depth perception through viewing angle changes, providing functionality that cannot be achieved by the individual components alone. This technical solution addresses the specific problem of providing three-dimensional visualization in telehealth consultations without requiring cumbersome hardware accessories, representing an inventive concept that transforms the claim into patent-eligible subject matter under Step 2B of the Alice/Mayo framework. Therefore, for at least the foregoing reasons, amended claim 1 is patent eligible under 35 U.S.C. §101. Independent claims 8 and 13 are amended to include similar features and are therefore patent eligible under 35 U.S.C. § 101 for similar reasons. The remaining claims, which depend from claims 1, 8, and 13, are also patent eligible for similar reasons. 
Accordingly, Applicant respectfully requests that the rejections under 35 U.S.C. § 101 be withdrawn; (3) Davidson teaches augmented reality and virtual signage display systems but does not disclose three-dimensional display capabilities through lenticular lens technology. Action at 13-14. The Office Action instead cites Krueger for teaching a lenticular display. While Krueger does disclose certain lenticular display functionality, the amended claim language provides a novel technical implementation that is not disclosed or suggested by the combination of Davidson and Krueger. The amended claim recites specific functionality that goes beyond Krueger's basic lenticular display teaching. While Krueger teaches a lenticular display that presents the images for the left eye and right eye in a unit that is not worn by the user, and that the left image to be projected at an angle visible to the left eye and causes the right image to be projected at an angle visible to the right eye, this teaching is limited to stereoscopic left-right eye image presentation. In contrast, the amended claim specifies that the lenticular lens enables viewing of different aspects of the patient environment from multiple viewing angles by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device and that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception. Krueger's disclosure is fundamentally different because it is limited to a binary left-eye/right-eye stereoscopic system, whereas the claimed invention provides a continuous range of multiple viewing angles with corresponding pixel-specific direction that changes dynamically as the viewing angle changes; (4) the claimed functionality provides a distinctly different technical approach from Krueger's stereoscopic system. 
Krueger's lenticular display creates depth perception through binocular disparity by presenting separate images to each eye simultaneously. Krueger at paragraph 11. The claimed system instead provides depth perception through a single display that reveals different pixels behind the lenticular lens as the user physically changes their viewing angle relative to the display device. This angle-dependent pixel revelation mechanism enables a user to see different aspects of the same scene by moving their position, which is not suggested by Krueger's fixed left-eye/right-eye image presentation system. The combination of Davidson and Krueger fails to suggest the claimed dynamic pixel-directing functionality. The references fail to suggest the claimed system where changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception. The claimed system provides a continuous spectrum of viewing angles with corresponding pixel direction changes based on user movement, which is fundamentally different from Krueger's dual-image stereoscopic approach and is not suggested by Davidson's augmented reality displays. The claimed functionality enables a user to see different aspects of a patient environment by physically changing position, providing a form of depth perception and perspective viewing that is distinct from stereoscopic depth perception. There is no suggestion in the cited references that would lead one of ordinary skill in the art to modify Krueger's fixed dual-image system to provide dynamic pixel revelation based on continuous viewing angle changes; (5) based upon the technical differences between Davidson and Krueger, a person of ordinary skill in the art would not have been motivated to modify Davidson's eyewear system with Krueger's lenticular display technology because such a modification would be technically incompatible and nonsensical. 
Davidson discloses a wearable device configured as glasses or another head-mounted display that projects virtual images into the field of view of the caregiver. The system specifically relies on tracking the head and eye position of the caregiver to adjust virtual images relative to the caregiver as a real-world object would, whereas the caregiver moves closer to the virtual image, the virtual image appears larger as if the caregiver is approaching a real object. Davidson at paragraph 84. This eyewear-based system is fundamentally designed around the concept that the display moves with the user's head and maintains a fixed spatial relationship to the user's eyes. In contrast, Krueger teaches a lenticular display system specifically designed for stationary viewing where the viewing angle changes relative to a fixed display device. A lenticular display presents the images for the left eye and right eye in a unit that is not worn by the user, instead the image for the left eye and the right eye are produced by a single device in a way that causes the left image to be projected at an angle visible to the left eye and causes the right image to be projected at an angle visible to the right eye. Krueger at paragraph [0008]. The functionality of lenticular displays depends entirely on the viewer's ability to move relative to the display to see different aspects of the image from multiple viewing angles. The technical incompatibility becomes apparent when considering that, in Davidson's eyewear system, the viewing angle of the user's eyes relative to the display does not change because the eyewear is physically attached to the user's head. A person of ordinary skill would recognize that incorporating a lenticular lens into head-mounted eyewear would serve no functional purpose since the core benefit of lenticular technology - providing different visual perspectives as viewing angles change - cannot be realized when the display is fixed relative to the viewer's eyes. 
Furthermore, the amended claim language specifically requires that different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception. This functionality is impossible to achieve in Davidson's head-mounted system, where the display device maintains a constant spatial relationship to the viewer's eyes. A skilled artisan would understand that modifying Davidson's eyewear to include Krueger's lenticular display would eliminate the very functionality that makes lenticular displays useful, rendering the modification technically pointless and contrary to sound engineering principles. Therefore, the claimed functionality of providing depth perception through changing the viewing angle reveals different pixels of the display device behind the lenticular lens represents a specific technical solution that is not obvious from the cited references. This dynamic pixel-revelation mechanism for depth perception differs from Krueger's stereoscopic approach and provides a novel method for three-dimensional visualization without specialized eyewear. The prior art references, individually or in combination, do not suggest this specific technical implementation of directing pixels to viewing positions based on continuous angle changes to achieve depth perception through pixel revelation behind the lenticular lens array. Therefore, for at least the foregoing reasons, amended claim 1 is patentable over the cited references of record, and withdrawal of the rejection of claim 1, and of the claims that depend therefrom, is respectfully requested; (6) independent claim 8 is directed to a method of conducting a telehealth consultation. 
While the scope of claim 8 is distinct from that of claim 1, claim 8 is patentable over the cited art for at least similar reasons as claim 1, and withdrawal of the rejection of claim 8 and of the claims that depend therefrom is respectfully requested. Claims 3, 12-13, 15-18, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Davidson in view of Krueger, and in further view of Osterhout (US 2012/0194551). Applicant does not accede to the characterizations of the cited references or pending claims. Claim 3 depends from independent claim 1, and claim 12 depends from independent claim 8. Osterhout, which relates to an interactive head-mounted eyepiece, does not supply what is missing from the cited art. Therefore, claims 3 and 12 are patentable over the cited references of record for at least similar reasons as independent claims 1 and 8. Independent claim 13, while distinct in scope from that of claim 1, is patentable over the cited art for at least similar reasons as claim 1, and withdrawal of the rejection of claim 13 and of the claims that depend therefrom is respectfully requested.

In response to argument (1), the Examiner does not find the Applicant’s argument(s) persuasive. The Examiner maintains that the Applicant’s newly amended claims encompass managing personal behavior or relationships between people (including social activities, teaching, and following rules or instructions), which is a subgrouping of Certain Methods of Organizing Human Activity.
The additional elements “wherein the lenticular lens enables viewing of different aspects of the patient environment from multiple viewing angles by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device;” and “wherein the video feed is displayed in three dimensions by using the lenticular lens of the image forming device, wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception” are recited at a high degree of generality and amount to no more than generally linking the abstract idea to a particular technical environment. The recitations are also similar to adding the words "apply it" to the abstract idea. As set forth in MPEP 2106.05(f), merely reciting the words "apply it" or an equivalent is an example of when an abstract idea has not been integrated into a practical application. The Examiner maintains that the Applicant’s claims are similar to “iv. Recording, transmitting, and archiving digital images by use of conventional or generic technology in a nascent but well-known environment, without any assertion that the invention reflects an inventive solution to any problem presented by combining a camera and a cellular telephone, TLI Communications, 823 F.3d at 611-12, 118 USPQ2d at 1747” (See MPEP 2106.05(a)(I)), which the courts have indicated may not be sufficient to show an improvement in computer functionality. The Examiner maintains that the elaboration of the use of the lenticular lens amounts to no more than generally linking the abstract idea to a particular technical environment. The 35 U.S.C. 101 rejection(s) stand.

In response to argument (2), the Examiner does not find the Applicant’s argument(s) persuasive.
The Examiner maintains that the Applicant’s claims are similar to “iv. Recording, transmitting, and archiving digital images by use of conventional or generic technology in a nascent but well-known environment, without any assertion that the invention reflects an inventive solution to any problem presented by combining a camera and a cellular telephone, TLI Communications, 823 F.3d at 611-12, 118 USPQ2d at 1747” (See MPEP 2106.05(a)(I)), which the courts have indicated may not be sufficient to show an improvement in computer functionality. The Examiner maintains that the elaboration of the use of the lenticular lens amounts to no more than generally linking the abstract idea to a particular technical environment. The Examiner maintains that the newly amended claimed portions recite an abstract idea without significantly more. The 35 U.S.C. 101 rejection(s) stand.

In response to argument (3), the Examiner finds the Applicant’s argument(s) persuasive. The Examiner has supplemented the combination of Davidson (U.S. Patent Pre-Grant Publication No. 2022/0208367) and Krueger (U.S. Patent Pre-Grant Publication No. 2018/0008141) with Freeman et al. (U.S. Patent Pre-Grant Publication No. 2019/0385342). The Examiner has relied on Freeman to teach “by directing different pixels of the display device to different viewing positions as the multiple viewing angles change relative to the display device” as described above in the 35 U.S.C. 103 rejection(s). The 35 U.S.C. 103 rejection(s) stand.

In response to argument (4), the Examiner does not find the Applicant’s argument(s) persuasive.
The Examiner maintains that Krueger’s teachings in Paragraphs [0008], [0079], [0246]-[0256], that a multiple depth plane three-dimensional (3D) display system can visually provide multiple virtual depth planes at respective radial focal distances to simulate a 4D light beam field, encompass the newly amended claimed portion of “wherein different aspects of the video feed of the patient environment are viewable through the lenticular lens as a viewing angle changes relative to the display device such that changing the viewing angle reveals different pixels of the display device behind the lenticular lens to provide depth perception,” as in Paragraph [0080] Krueger teaches “ocular testing can be performed in a mode where the object is static and the person moves the head in a horizontal or vertical manner, or the object can be dynamically changing in size, position, or other features, while the person is rotating the head,” which the Examiner is interpreting to encompass the Applicant’s newly amended claimed portions, as the object can dynamically change in size, position, or other features. The 35 U.S.C. 103 rejection(s) stand.

In response to argument (5), the Examiner does not find the Applicant’s argument(s) persuasive. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Davidson with Krueger with the motivation of providing unique complexity to the visual elements and to the background scenes (See Background of Krueger in Paragraph [0041]). Davidson describes that the present disclosure generally relates to virtual signage, and more particularly to virtual signage at a medical facility utilizing augmented reality or mixed reality ([0002]), and Krueger describes an invention that relates to systems and methods that use virtual reality, augmented reality, and/or synthetic computer-generated 3-dimensional information (VR/AR/synthetic 3D) for the measurement of human ocular performance.
Both Davidson and Krueger operate in the field of augmented reality and/or mixed reality; accordingly, the Examiner maintains that the modification would be obvious to one of ordinary skill in the art. Further, the Examiner would like to point to the wearable device of both Davidson (Paragraph [0041], Fig. 10) and Krueger (Paragraph [0224], Figs. 2-4, 19). The Examiner maintains that it would have been obvious to one of ordinary skill in the art to integrate lenticular lenses into a wearable device, as Bristol et al. (U.S. Patent No. 10,578,875) recites in col. 6, ll. 8-20: “Lenses 302(a)-(b) generally represent any type or form of transmissive optical element or device that is capable of converging (i.e., focusing) and/or diverging (i.e., dispersing) light. Examples of such lenses include, without limitation, simple lenses, compound lenses (e.g., so-called doublet or triplet lenses), Fresnel lenses, liquid lenses, lenticular lenses, etc. These lenses may have a variety of properties, including accommodative properties (i.e., the ability to adjust optical power) and/or adaptive properties (i.e., the ability to control, compensate, or correct for wavefront errors such as distortion and aberrations). In some examples, a user may look through lenses 302(a)-(b) to view computer-generated imagery presented on display 330.” The 35 U.S.C. 103 rejection(s) stand.

In response to argument (6), the Examiner does not find the Applicant’s argument(s) persuasive. The Examiner maintains that claims 3-6, 8-9, 11-13, 15-18, 20 are rejected under 35 U.S.C. 103. The 35 U.S.C. 103 rejection(s) stand.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bristol et al. (U.S. Patent No.
10,578,875), describes a head-mounted display system that may include (1) a display for displaying computer-generated imagery, (2) a lens, (3) a peripheral wall extending from a back end to a front end, with the back end coupled to the lens and the front end coupled to the display such that the lens, the peripheral wall, and the display together define an enclosure, and (4) a speaker housed by the enclosure.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Bennett S Erickson whose telephone number is (571)270-3690. The examiner can normally be reached Monday - Friday: 9:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Morgan, can be reached at (571) 272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Bennett Stephen Erickson/
Primary Examiner, Art Unit 3683

Prosecution Timeline

Mar 17, 2023
Application Filed
Nov 14, 2024
Non-Final Rejection — §101, §103
Dec 30, 2024
Response Filed
Mar 12, 2025
Final Rejection — §101, §103
May 09, 2025
Response after Non-Final Action
Jun 16, 2025
Request for Continued Examination
Jun 24, 2025
Response after Non-Final Action
Jul 24, 2025
Non-Final Rejection — §101, §103
Oct 27, 2025
Response Filed
Jan 30, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597518
INCORPORATING CLINICAL AND ECONOMIC OBJECTIVES FOR MEDICAL AI DEPLOYMENT IN CLINICAL DECISION MAKING
2y 5m to grant Granted Apr 07, 2026
Patent 12580069
AUTOMATIC SETTING OF IMAGING PARAMETERS
2y 5m to grant Granted Mar 17, 2026
Patent 12580061
System and Method for Virtual Verification in Pharmacy Workflow
2y 5m to grant Granted Mar 17, 2026
Patent 12567501
STABILITY ESTIMATION OF A POINT SET REGISTRATION
2y 5m to grant Granted Mar 03, 2026
Patent 12499978
METHODS, SYSTEMS, AND DEVICES FOR DETERMINING MULTI-PARTY COLOCATION
2y 5m to grant Granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
38%
Grant Probability
84%
With Interview (+45.9%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 141 resolved cases by this examiner. Grant probability derived from career allow rate.
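
The projection figures above appear to combine by simple percentage-point arithmetic: the 38% grant probability is the career allow rate (53 granted of 141 resolved), and the 84% "with interview" figure is that rate plus the +45.9-point interview lift. A minimal sketch of that reading (an assumption; the tool's actual model is not disclosed):

```python
# Hypothetical reconstruction of the dashboard's headline numbers.
# Assumption: the interview "lift" is additive in percentage points on
# top of the rounded career allow rate; the real model may differ.

career_granted = 53    # granted cases shown on the examiner card
career_resolved = 141  # resolved cases shown on the examiner card

base_rate = round(career_granted / career_resolved, 2)  # 0.38 -> "38%"
interview_lift = 0.459                                  # +45.9 points

with_interview = base_rate + interview_lift             # 0.839 -> "84%"

print(f"Grant probability: {base_rate:.0%}")   # 38%
print(f"With interview:    {with_interview:.0%}")  # 84%
```

Under this reading, the two displayed percentages are consistent with each other to within rounding.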
