Prosecution Insights
Last updated: April 19, 2026
Application No. 17/886,392

TECHNIQUES FOR THREE-DIMENSIONAL ANALYSIS OF SPACES

Non-Final OA: §103, §112
Filed: Aug 11, 2022
Examiner: ZHAO, CHRISTINE NMN
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Hill-Rom Services, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 61% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career allow rate: 61% (11 granted / 18 resolved), -0.9% vs TC avg
Interview lift: +58.3% for resolved cases with interview (strong)
Typical timeline: 3y 0m average prosecution, with 19 applications currently pending
Career history: 37 total applications across all art units
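
How these figures fit together (an illustrative sketch, not the tool's actual code): the career allow rate is granted cases over resolved cases, and the interview lift is the gap between with-interview and without-interview allowance rates. The 6-with / 12-without split below is an assumption chosen only because it reproduces the reported +58.3%; the page itself reports only the 11/18 aggregate.

```python
# Hedged sketch of how allow-rate and interview-lift figures like the ones
# above are commonly derived. Counts other than 11 granted / 18 resolved
# are illustrative assumptions, not data from this page.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate = granted / resolved cases."""
    return granted / resolved

career = allow_rate(11, 18)                 # 0.611 -> the 61% shown above

# One split consistent with the reported +58.3% lift (assumed, not reported):
# 6 of 6 resolved cases with an interview granted, 5 of 12 without.
with_interview    = allow_rate(6, 6)        # 1.000 (a display cap of 99% would match the header)
without_interview = allow_rate(5, 12)       # 0.4167
lift_points = (with_interview - without_interview) * 100

print(f"career {career:.1%}, interview lift {lift_points:+.1f} pts")
# -> career 61.1%, interview lift +58.3 pts
```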

Statute-Specific Performance

§101: 11.5% (-28.5% vs TC avg)
§103: 58.2% (+18.2% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)
Tech Center average is an estimate. Based on career data from 18 resolved cases.
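
The per-statute deltas also let you back out the Tech Center baseline used for comparison. A quick check (an assumed reconstruction, not the tool's methodology):

```python
# Reconstruct the implied Tech Center baseline from each statute's rate and
# its reported delta: baseline = rate - delta. Figures are from the table above.
stats = {
    "§101": (11.5, -28.5),
    "§103": (58.2, +18.2),
    "§102": (8.2, -31.8),
    "§112": (16.4, -23.6),
}
for statute, (rate, delta) in stats.items():
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")
# Every row implies the same 40.0% baseline, suggesting a single TC-wide
# estimate rather than a per-statute average.
```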

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 18, 2025 has been entered.

Claim Objections

Claim 29 is objected to because of the following informalities: In claim 29, line 5, “the angle” should read “the temperature”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 10 and 26-27 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding claim 10, paragraph [0049] of the specification discloses “the imaging system 134 analyzes a first image of the room 102, generates the control signal(s) based on the analysis, and identifies a second image of the room 102 once the control signal(s) have caused an adjustment of the camera 104, the first optical instrument 124, the second optical instrument 126, or any combination thereof. This process can be repeated for the purpose of monitoring aspects of the room 102, such as tracking subjects in the room”. However, there is no mention of combining the first image and the second image. Thus, the written description fails to support the limitation “the 3D image is generated by combining, with the processor, the first 2D image and the second 2D image” in claim 10.

Regarding claims 26-27, paragraph [0078] of the specification discloses “The imaging system 608 may infer that the subject has moved closer to, or farther away, from the camera 602 by determining that the object representing the subject has changed size in consecutive images”. 
Paragraph [0095] further discloses “The imaging system 608 may determine a distance between each fiducial marker 610 and the camera 602 based on the image. For example, the fiducial markers 610 have a relatively small size in the image if they are located relatively far from the camera 602, and may have a relatively large size in the image if they are located relatively close to the camera 602”. However, there is no mention of determining a distance between the subject and the optical instrument. Thus, the written description fails to support the limitation “determining a distance between the subject and the optical instrument based on the difference” in claims 26-27. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim(s) 1-7, 12-13, 16-17, 21 and 26-27 are rejected under 35 U.S.C. 103 as being unpatentable over Lv et al. (US 2014/0184745) in view of Wells et al. (US 10996481) and in further view of Xu (US 2022/0292796). Regarding claim 1, Lv discloses an imaging system, comprising: an optical camera configured to capture a two-dimensional (2D) image of a three-dimensional (3D) space (paragraph 0029: “a single camera 110, which may be an infrared (IR) camera”); an optical instrument disposed in the 3D space and configured to refract and/or reflect light (FIGs. 
1A-B, paragraphs 0029-0030: “a mirror 140” with “a field of reflection”); a processor communicatively coupled to the optical camera (paragraph 0009: “A processor coupled with the camera”); and memory storing instructions that, when executed by the processor, cause the processor to perform operations (paragraph 0049: “The ROM 1006 is used to store instructions and perhaps data that are read during program execution”) comprising: receiving the 2D image of the 3D space from the optical camera (paragraph 0009: “receive a first image data set…from the camera”); identifying, in the 2D image: a virtual image of the 3D space generated by the optical instrument refracting and/or reflecting the light (paragraph 0031: “data set may be split into two regions: one region may correspond to the image captured in the direct view 150 of the camera; the other region may correspond to the reflected view 160, 170 from the mirror”), and a first object depicting a subject disposed in the 3D space (FIG. 4A, paragraph 0035: “the camera and computer have located fingertips at 410, 412, 414, 416, and 418 in view 401”) from a first direction extending from the optical camera to the subject (paragraph 0009: “a camera with a field of view oriented in a first direction”); identifying, in the virtual image: a second object depicting the subject disposed in the 3D space (FIG. 4B, paragraph 0035: “The corresponding fingertips have been located at 420, 422, 424, 426 and 428 in view 402”) from a second direction extending from the optical camera to the subject via the optical instrument (paragraph 0009: “a mirror with a field of reflection oriented in a second direction”), the second direction being different than the first direction (paragraph 0011: “the camera has two views of the hand from two different angles”); and generating a 3D image depicting the subject based on the angle (paragraph 0042: “The method 800 may be configured to create a three dimensional representation of the object…Using a known distance between the camera and the mirror, and a known angle between the field of view of the camera and the field of reflection of the mirror”), the first object, and the second object (paragraphs 0009, 0042: “create a three dimensional representation of the object using at least the first image data set”); or determining a location of the subject in the 3D space based on the angle (paragraph 0011: “the method includes tracking motion in three dimensions of at least one location on the object…Using the known distance between the camera and the mirror, and the known angle between the field of view of the camera and the field of reflection of the mirror”), the first object, and the second object (paragraphs 0011, 0036: “The triangulation method may be used to compute the 3D location 510…location 510 may simply be the intersection of the two rays 506 and 508 that connect the camera optical center 502 (as well as the virtual camera optical center 504) and the detected fingertip in each view”). However, Lv fails to disclose the optical instrument includes one or more fiducial markers physically disposed on a surface thereof; and identifying, in the virtual image: the one or more fiducial markers physically disposed on the surface and projected into the virtual image by the optical instrument; and an angle of the surface based on the identified fiducial markers. 
In the related art of mapping using fiducial markers, Wells discloses the optical instrument includes one or more fiducial markers physically disposed on a surface thereof (Wells FIG. 3: fiducial 315 within the pattern of film 311; col 1 lines 53-59: “a film may be disposed on the reflective surface of the windshield within the HUD patch including one or more identification features discernable from the HUD patch image”); and identifying, in the virtual image: the one or more fiducial markers physically disposed on the surface (Wells col 10 lines 34-54: “The patterns are discernable by image sensor 151 which captures the HUD patch image”) and projected into the virtual image by the optical instrument (Wells col 10 lines 34-54: “the HUD patch image includes the reflection off the windshield 116 of the calibration array and the film reflection and/or fluorescence”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lv to incorporate the teachings of Wells to evaluate and counteract the distortion effects of the reflective surface (Wells col 1 lines 53-59). However, Lv, modified by Wells, still fails to disclose identifying, in the virtual image: an angle of the surface based on the identified fiducial markers. In the related art of mapping using fiducial markers, Xu discloses identifying, in the virtual image: an angle of the surface based on the identified fiducial markers (Xu paragraph 0035: “within the captured image…a fiducial marker may include one or more shapes within the fiducial marker that appear differently when rotated or transformed so as to indicate a degree of rotation and/or transformation upon detection”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lv and Wells to incorporate the teachings of Xu to consistently present the virtual environment across devices (Xu paragraph 0004). Regarding claim 2, Lv, modified by Wells and Xu, discloses the imaging system claimed in claim 1, wherein the optical instrument comprises at least one of a mirror or a lens (Lv paragraph 0029: “a mirror 140”). Regarding claim 3, Lv, modified by Wells and Xu, discloses the imaging system claimed in claim 1, wherein the 2D image depicts a third object depicting the one or more fiducial markers overlaid on the virtual image (Wells col 8 lines 10-28: “the center of the virtual image generator 105 preferably aligns with the center of the HUD patch…such adjustment may be effected through image sensor 151 feedback of alignment of the centering image 317 generated by the PGU 104 [to] the known center feature or fiducial 315 within the pattern of film 311”); and the location is determined based on the third object depicting the one or more fiducial markers (Xu paragraphs 0018, 0021: “Once detected, the second device derives three-dimensional (3D) coordinates of the fiducial points…based on the correspondence, the second device can derive its pose (T21 ) in the first coordinate system” where pose is defined by position and orientation). 
Regarding claim 4, Lv, modified by Wells and Xu, discloses the imaging system claimed in claim 3, wherein generating the 3D image based on the third object depicting the one or more fiducial markers comprises: determining a distance between the optical camera and the optical instrument based on a relative size of the one or more fiducial markers in the 2D image with respect to a known size of the one or more fiducial markers (Xu paragraph 0047: “the distance between two non-adjacent squares [of the fiducial marker] may be known to the device. The device may calculate the difference between the known distance and the distance detected in a captured image. The larger the difference between the known value and the distance calculated from the image, the further away the camera may be from the fiducial marker”); determining an orientation of the surface of the optical instrument based on a relative shape of the one or more fiducial markers in the 2D image with respect to a known shape of the one or more fiducial markers (Xu paragraph 0035: “within the captured image…a fiducial marker may include one or more shapes within the fiducial marker that appear differently when rotated or transformed so as to indicate a degree of rotation and/or transformation upon detection”); and generating the 3D image based on the distance and the orientation (Lv paragraph 0042: “The method 800 may be configured to create a three dimensional representation of the object…Using a known distance between the camera and the mirror, and a known angle between the field of view of the camera and the field of reflection of the mirror”). Regarding claim 5, Lv, modified by Wells and Xu, discloses the imaging system claimed in claim 3, wherein determining the location based on the third object depicting the one or more fiducial markers comprises: determining a distance between the optical camera and the optical instrument based on a relative size of the one or more fiducial markers in the 2D image with respect to a known size of the one or more fiducial markers (Xu paragraph 0047: “the distance between two non-adjacent squares [of the fiducial marker] may be known to the device. The device may calculate the difference between the known distance and the distance detected in a captured image. The larger the difference between the known value and the distance calculated from the image, the further away the camera may be from the fiducial marker”); determining an orientation of the surface of the optical instrument based on a relative shape of the one or more fiducial markers in the 2D image with respect to a known shape of the one or more fiducial markers (Xu paragraph 0035: “within the captured image…a fiducial marker may include one or more shapes within the fiducial marker that appear differently when rotated or transformed so as to indicate a degree of rotation and/or transformation upon detection”); and determining the location based on the distance and the orientation (Lv paragraph 0011: “the method includes tracking motion in three dimensions of at least one location on the object…Using the known distance between the camera and the mirror, and the known angle between the field of view of the camera and the field of reflection of the mirror”). 
Regarding claim 6, Lv, modified by Wells and Xu, discloses the imaging system claimed in claim 3, wherein the optical camera comprises an infrared (IR) camera (Lv paragraph 0029: “a single camera 110, which may be an infrared (IR) camera”), the 2D image comprises an IR image (Lv paragraph 0041: “images of infrared radiation”), and the one or more fiducial markers comprise an IR pattern disposed on the surface of the optical instrument (Wells col 7 lines 30-40: “the windshield 116 may have disposed thereon a selectively reflective film 311 covering a portion of the HUD patch…Film 311 may be selectively reflective of light, preferably light outside the visible spectrum, and more particularly IR light”). Regarding claim 7, Lv, modified by Wells and Xu, discloses the imaging system claimed in claim 3, wherein the one or more fiducial markers comprise an ArUco code (Xu FIG. 2: fiducial markers 208 and 212). Regarding claim 12, Lv, modified by Wells and Xu, discloses the imaging system claimed in claim 1, the subject being a first subject, wherein the operations further comprise identifying, in the virtual image, a third object depicting one or more additional fiducial markers disposed on the first subject or on a second subject (Lv paragraph 0028: “the system may include an efficient algorithm to track multiple fingers in 3D”) in the 3D space (Xu paragraph 0021: “The second device may capture a single or a sequence of images of the fiducial marker or the marker image displayed by the first device…Based on the image, the second device detects feature points from the fiducial marker or the marker image”), and the location is determined based on the third object (Xu paragraphs 0018, 0021: “Once detected, the second device derives three-dimensional (3D) coordinates of the fiducial points…based on the correspondence, the second device can derive its pose (T21 ) in the first coordinate system” where pose is defined by position and orientation). Regarding claim 13, it is the corresponding computing system to the imaging system claimed in claim 1. Therefore, Lv, modified by Wells and Xu, discloses the limitations of claim 13 as it does the limitations of claim 1. Regarding claim 16, Lv, modified by Wells and Xu, discloses the computing system claimed in claim 13, the subject being a first subject, wherein the operations further comprise identifying, in the virtual image, a third object depicting one or more additional fiducial markers disposed on the first subject or on a second subject (Lv paragraph 0028: “the system may include an efficient algorithm to track multiple fingers in 3D”) in the 3D space (Xu paragraph 0021: “The second device may capture a single or a sequence of images of the fiducial marker or the marker image displayed by the first device…Based on the image, the second device detects feature points from the fiducial marker or the marker image”), and generating the 3D image based on the third object (Xu paragraph 0067: “an instance of a virtual object may be presented on a display of the second mobile device based on the coordinate-system transform”). 
Regarding claim 17, Lv, modified by Wells and Xu, discloses the computing system claimed in claim 16, wherein: the one or more additional fiducial markers are disposed on a support structure or a medical device (this limitation is disclosed in an alternative clause and thus, read only on the second limitation); or the one or more additional fiducial markers are displayed on a screen of the first subject or the second subject (Xu paragraphs 0020, 0023: “The first device may present a fiducial marker or a marker image on a display of the first device” and “may present a virtual object on display 108 of the first device 104 as if the virtual object was a physical object positioned within the real-world environment”). Regarding claim 21, Lv, modified by Wells and Xu, discloses the imaging system of claim 1, wherein the optical instrument is positioned such that: the surface of the optical instrument is within a field of view of the optical camera (Lv FIGs. 1A-B: mirror 140 is in the field of view of camera 110; paragraph 0030: “The mirror 140 may have a field of reflection arranged to overlap with the field of view of the camera 110”), and the surface is facing a portion of the subject that is blocked from the field of view of the optical camera (Lv FIG. 2: the reflected view 220 shows a side of the hand that cannot be seen from direct view 210). Regarding claim 26, Lv, modified by Wells and Xu, discloses the imaging system of claim 1, wherein the operations further comprise determining a difference between a configuration of the one or more fiducial markers identified in the 2D image and a known configuration of the one or more fiducial markers stored in the memory (Xu paragraph 0047: “the distance between two non-adjacent squares [of the fiducial marker] may be known to the device. The device may calculate the difference between the known distance and the distance detected in a captured image”); determining a distance between the subject and the optical instrument based on the difference (Xu paragraph 0047: “The larger the difference between the known value and the distance calculated from the image, the further away the camera may be from the fiducial marker”); and determining an orientation of the surface of the optical instrument based on a relative shape of the one or more fiducial markers in the 2D image with respect to a known shape of the one or more fiducial markers (Xu paragraph 0035: “within the captured image…a fiducial marker may include one or more shapes within the fiducial marker that appear differently when rotated or transformed so as to indicate a degree of rotation and/or transformation upon detection”), wherein the 3D image is generated based on the distance and the orientation (Lv paragraph 0042: “The method 800 may be configured to create a three dimensional representation of the object…Using a known distance between the camera and the mirror, and a known angle between the field of view of the camera and the field of reflection of the mirror”). Regarding claim 27, Lv, modified by Wells and Xu, discloses the imaging system of claim 1, wherein the operations further comprise determining a difference between a configuration of the one or more fiducial markers identified in the 2D image and a known configuration of the one or more fiducial markers stored in the memory (Xu paragraph 0047: “the distance between two non-adjacent squares [of the fiducial marker] may be known to the device. 
The device may calculate the difference between the known distance and the distance detected in a captured image”); determining a distance between the subject and the optical instrument based on the difference (Xu paragraph 0047: “The larger the difference between the known value and the distance calculated from the image, the further away the camera may be from the fiducial marker”); and determining an orientation of the surface of the optical instrument based on a relative shape of the one or more fiducial markers in the 2D image with respect to a known shape of the one or more fiducial markers (Xu paragraph 0035: “within the captured image…a fiducial marker may include one or more shapes within the fiducial marker that appear differently when rotated or transformed so as to indicate a degree of rotation and/or transformation upon detection”), wherein the location of the subject in the 3D space is determined based on the distance and the orientation (Lv paragraph 0011: “the method includes tracking motion in three dimensions of at least one location on the object…Using the known distance between the camera and the mirror, and the known angle between the field of view of the camera and the field of reflection of the mirror”). Claim(s) 8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Lv, Wells and Xu in view of Gallagher et al. (US 9992480). Regarding claim 8, Lv, modified by Wells and Xu, discloses the imaging system claimed in claim 1, wherein the operations further comprise generating the control signal based on the 3D image (Lv FIG. 8, paragraph 0046: “The method 800 is configured to control a graphical user interface by inputting data derived from the tracking motion in three dimensions at 818”). However, Lv fails to explicitly disclose a transceiver communicatively coupled to the processor and configured to transmit a control signal to a robotic device. In the related art of using mirrors to capture images from multiple vantages, Gallagher discloses a transceiver (Gallagher FIG. 11, col 21 lines 8-10: a transceiver is a necessary component of network interface subsystem 1116 to communicate with other devices on the network) communicatively coupled to the processor (Gallagher FIG. 11: processor 1114) and configured to transmit a control signal to a robotic device (Gallagher col 20 lines 43-63: “all or aspects of control system 1060 may be implemented on one or more computing devices that are in wired and/or wireless communication with the robot 1020, such as computing device 1110”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Lv to incorporate the teachings of Gallagher to decrease the computational and/or human resources needed to navigate a robot (Gallagher col 6 lines 60-63). Regarding claim 10, Lv, modified by Wells and Xu, discloses the imaging system claimed in claim 1, the 2D image being a first 2D image, the virtual image being a first virtual image, the subject being a first subject, wherein the operations further comprise: performing an analysis, with the processor, of the first 2D image (Lv FIG. 8: 814); generating a control signal based on the analysis (Lv FIG. 
8, paragraph 0046: “The method 800 is configured to control a graphical user interface by inputting data derived from the tracking motion in three dimensions at 818”); receiving a second 2D image of the 3D space from the optical camera (Lv paragraph 0009: “receive a first image data set…from the camera”) when the optical instrument is in the second position (Lv paragraph 0030: “The location and orientation of the mirror 140 may be adjusted in advance”); identifying, in the second 2D image, a second virtual image generated by the optical instrument refracting and/or reflecting the light (Lv paragraph 0031: “data set may be split into two regions: one region may correspond to the image captured in the direct view 150 of the camera; the other region may correspond to the reflected view 160, 170 from the mirror”); identifying, in the second 2D image, a third object depicting a second subject disposed in the 3D space (Lv FIG. 4A, paragraph 0035: “the camera and computer have located fingertips at 410, 412, 414, 416, and 418 in view 401”) from a third direction extending from the optical camera to the second subject (Lv paragraph 0009: “a camera with a field of view oriented in a first direction”); and identifying, in the second virtual image, a fourth object depicting the second subject disposed in the 3D space (Lv FIG. 4B, paragraph 0035: “The corresponding fingertips have been located at 420, 422, 424, 426 and 428 in view 402”) from a fourth direction extending from the optical camera to the second subject via the optical instrument in the second position (Lv paragraph 0009: “a mirror with a field of reflection oriented in a second direction”), wherein the fourth direction is different than the third direction (Lv paragraph 0011: “the camera has two views of the hand from two different angles”), and the 3D image depicts the second subject (Lv paragraph 0028: “the system may include an efficient algorithm to track multiple fingers in 3D”). However, Lv fails to disclose the optical instrument further comprises an actuator; providing the control signal to the actuator, the control signal causing the actuator to move the optical instrument from a first position to a second position; and the 3D image is generated by combining, with the processor, the first 2D image and the second 2D image. In related art, Gallagher discloses the optical instrument further comprises an actuator (Gallagher col 3 lines 53-58: “the first mirror is coupled to an adjustable arm of the robot, the actuators dictate an adjustable arm pose of the adjustable arm”); providing the control signal to the actuator (Gallagher col 8 lines 1-9: “provide control commands to actuators”), the control signal causing the actuator to move the optical instrument from a first position to a second position (Gallagher col 3 lines 53-58: “adjusting the first mirror pose of the first mirror to the adjusted pose includes actuating the actuators of the adjustable arm”); and the 3D image is generated by combining, with the processor, the first 2D image and the second 2D image (Gallagher col 12 lines 28-45: “may utilize multiple image pairs capturing one or more of the same points from different vantages to determine a depth value feature for a 3D coordinate…a first image of the image pair may be one captured by camera sensor 110A from a first vantage of camera sensor 110A and a second image of the image pair may be one captured by camera sensor 110E3 from a second vantage (corresponding to a particular adjustment of mirrors 122, 124)”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Lv to incorporate the teachings of Gallagher to enable multiple images of a portion of an environment to be captured from multiple vantages without necessitating adjusting the pose of one or more camera sensors and/or adjusting the pose of a housing that houses one or more camera sensors (Gallagher col 6 line 63 – col 7 line 1). Claim(s) 9 is rejected under 35 U.S.C. 103 as being unpatentable over Lv, Wells and Xu in view of Grimaud (US 2016/0125638). Regarding claim 9, Lv, modified by Wells and Xu, discloses the imaging system claimed in claim 1. However, Lv fails to disclose identifying, in the 2D image, a second virtual image of the 3D space generated by a second optical instrument refracting and/or reflecting the light; and identifying, in the second virtual image, a third object depicting the subject disposed in the 3D space from a third direction extending from the optical camera to the subject via the second optical instrument, the third direction being different than the first direction and the second direction, and the 3D image is generated based on the third object, or the location is determined based on the third object. In the related art of using mirrors to obtain 3D information of an object, Grimaud discloses identifying, in the 2D image (Grimaud paragraph 0005: “The captured image has (i) a direct view of the target object and (ii) at least one reflection producing one or more reflected views of the target object from at least one reflective surface”), a second virtual image of the 3D space generated by a second optical instrument refracting and/or reflecting the light (Grimaud paragraph 0012: “a second reflected view of the target object from the second reflective surface”); and identifying, in the second virtual image, a third object depicting the subject disposed in the 3D space from a third direction extending from the optical camera to the subject via the second optical instrument (Grimaud FIG. 8A, paragraph 0071: “two mirrors 820a-b have reflections 827a-b, each containing reflected view 832a-b of a common region 861a-b on the surface of an object 830”), the third direction being different than the first direction and the second direction (Grimaud FIG. 8A, paragraph 0074: mirrors 820a-b have fields of view 821a-b extending in different directions), and the 3D image is generated based on the third object (Grimaud paragraph 0058: “There is an overlap 270 between the images of the mirrors 220a-b…enable a 3D shape of the target object 230 to be determined in the overlap region 270”), or the location is determined based on the third object (Grimaud paragraph 0064: “a 3D space location of a surface of any overlap regions, e.g., the back surface of the target object, can be precisely determined by image processor 150, 250”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Lv to incorporate the teachings of Grimaud to improve the digital representation by enabling all surfaces to be imaged and by decreasing the distortion at the image seams due to the increase in overlap between individual images (Grimaud paragraph 0050). Claim(s) 11 is rejected under 35 U.S.C. 103 as being unpatentable over Lv, Wells and Xu in view of Young et al. (US 2020/0110194). 
Regarding claim 11, Lv, modified by Wells and Xu, discloses the imaging system claimed in claim 1. However, Lv fails to disclose a transceiver configured to receive sensor data from a support structure of the individual, the sensor data indicating a weight of the individual on a surface of the support structure, wherein: the subject comprises an individual; and the operations further comprise determining a pose of the individual based on: the sensor data; and the 3D image or the location. In the related art of using sensors to determine person-specific information, Young discloses a transceiver configured to receive sensor data from a support structure of the individual (Young FIG. 2, paragraphs 0036-0037: “a local controller 200 can be wired or wirelessly connected to the load or other sensors 106 and collects and processes the signals from the load or other sensors 106…The controller 200 can be programmed to control other devices based on the processed data…the control of other devices also being wired or wireless”), the sensor data indicating a weight of the individual on a surface of the support structure (Young FIG. 1, paragraph 0032: “Because the system relies on the force of gravity to determine weight, sensors are required at each point where an object bears weight on the ground”), wherein: the subject comprises an individual (Young FIG. 1, paragraph 0035: “subject 10”); and the operations further comprise determining a pose of the individual based on: the sensor data (Young paragraph 0058: “Data from the load or other sensors can be used to detect actual body positions of the subject on the substrate, such as whether the subject is on its back, side, or stomach”); and the 3D image or the location (Young paragraph 0004: “determining a body position of the item on the substrate based at least the determined relationships between the multiple sensors, the location of the subject, and the angular orientation of the item”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Lv to incorporate the teachings of Young to prevent falls or to determine if a subject should be turned to avoid bed sores in a medical setting (Young paragraphs 0057-0058). Claim(s) 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Lv, Wells and Xu in view of Jansen et al. (NPL "Context aware inactivity recognition for visual fall detection"). Regarding claim 14, Lv, modified by Wells and Xu, discloses the computing system claimed in claim 13. However, Lv fails to explicitly disclose the subject comprises an individual; and the operations further comprise determining a pose of the individual based on the 3D image. In the related art of automatic fall detection, Jansen discloses the subject comprises an individual (Jansen pages 1-2: “the subject” is also referred to as “the patient” or “the person”); and the operations further comprise determining a pose of the individual based on the 3D image (Jansen page 2: “with depth information from the 3D camera images in order to obtain the location of the person in the room, together with the 3D orientation vector of its torso”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Lv to incorporate the teachings of Jansen to apply 3D camera technology to fall detection so that the fall detection does not rely on the appropriate operation of a wearable device by its user (Jansen page 1). Regarding claim 15, Lv, modified by Wells, Xu and Jansen, discloses the computing system claimed in claim 14, wherein the operations further comprise: determining a condition of the individual based on the pose (Jansen page 2: “Given the sequence of the person's location in the room in the subsequent video frames, inactivity is detected and used as a second trigger. Both the person's orientation and the detection of inactivity are fed into a context model which decides whether or not to trigger an alarm call”), the condition comprising at least one of a bed exit, a fall (Jansen page 1: the alarm is triggered upon fall detection), or a posture associated with aspiration. Claim(s) 22-25 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Lv, Wells and Xu in view of Tran (US 2012/0092156). Regarding claim 22, Lv, modified by Wells and Xu, discloses the imaging system of claim 1, wherein the location of the subject in the 3D space is determined based on identifying the one or more fiducial markers in the 2D image (Xu paragraphs 0018, 0021: “Once detected, the second device derives three-dimensional (3D) coordinates of the fiducial points…based on the correspondence, the second device can derive its pose (T21 ) in the first coordinate system” where pose is defined by position and orientation). However, Lv fails to disclose receiving additional data from a component of the system that is separate from the optical camera and the optical instrument; and confirming a position or an orientation of the subject at the location based on the additional data. In the related art of patient monitoring, Tran discloses receiving additional data from a component of the system that is separate from the optical camera and the optical instrument (Tran paragraph 0045: “The mesh network includes an in-door positioning 8B” that receives “RSSI and accelerometer data”); and confirming a position or an orientation of the subject at the location based on the additional data (Tran paragraph 0045: “RF signal strength RSSI reading is determined…are used to determine the wearer's location”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Lv to incorporate the teachings of Tran to provide improved position determination accuracy and resistance to environmental variability (Tran paragraph 0048). Regarding claim 23, Lv, modified by Wells, Xu and Tran, discloses the imaging system of claim 22, wherein the additional data comprises non-image data (Tran paragraph 0045: “RF signal strength RSSI reading”). Regarding claim 24, Lv, modified by Wells, Xu and Tran, discloses the imaging system of claim 23, the operations further comprising determining a health risk associated with the subject based at least in part on the position or the orientation (Tran paragraph 0149: “The system determines if patient needs assistance based on in-door position, fall detection and vital parameter”). 
Regarding claim 25, Lv, modified by Wells, Xu and Tran, discloses the imaging system of claim 24, wherein the position or the orientation comprises a first position or orientation, the operations further comprising: determining a second position or orientation of the subject different from the first position or orientation (Lv paragraphs 0041-0043: “the differences between the first image data set and the second image data set may represent changes in object (e.g. hand) location over time…to track motion in three dimensions at 816 by tracking locations on a human hand”); and determining the health risk associated with the subject based at least in part on the first position or orientation and the second position or orientation (Tran paragraph 0017: “refining identification of the activity, including analyzing a rate of change of a location of the object…can send an alert upon identification of predetermined set of behavior changes”). Regarding claim 29, Lv, modified by Wells, Xu and Tran, discloses the imaging system of claim 24, wherein the 2D image illustrates the subject disposed on a support structure (Tran paragraph 0074: “The system of sensors the patient monitoring system can determine… whether the user…has failed to get out of bed”), the imaging system further comprises a temperature sensor associated with the support structure (Tran paragraph 0067: “a temperature sensor can be provided on the chair, sofa, couch, or bed to detect the temperature of the patient”), and the additional data comprises a temperature detected by the temperature sensor, such that that the position or the orientation is confirmed based on the temperature (Tran paragraphs 0067, 0073: “and transmit the information over the mesh network” in which “The patient monitoring system integrates sensor data from different activity domains to make a number of determinations…One activity domain determination within the patient monitoring system includes movement of the person being monitored”). Claim(s) 28 is rejected under 35 U.S.C. 103 as being unpatentable over Lv, Wells, Xu and Tran in view of Shen et al. (US 2022/0248979). Regarding claim 28, Lv, modified by Wells, Xu and Tran, discloses the imaging system of claim 24, wherein the 2D image illustrates the subject disposed on a support structure (Tran paragraph 0074: “The system of sensors the patient monitoring system can determine… whether the user…has failed to get out of bed”). However, Lv and Tran fail to disclose a sensor configured to detect an angle of at least a portion of the support structure, and the additional data comprises an angle of the support structure detected by the sensor, such that the position or the orientation is confirmed based on the angle. In the related art of patient monitoring, Shen discloses a sensor configured to detect an angle of at least a portion of the support structure (Shen paragraph 0031: “determine or access an orientation of a physical support apparatus comprising a bed, table, chair, or other structure configured to support the person at least partially off a floor or ground”), and the additional data comprises an angle of the support structure detected by the sensor, such that the position or the orientation is confirmed based on the angle (Shen paragraph 0031: “calculate an orientation of the person relative to the physical support apparatus based on (a) the orientation of the physical support structure”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lv and Tran to incorporate the teachings of Shen to monitor and coordinate patient turning efforts that enable more efficient and effective patient care (Shen paragraphs 0012, 0092).

Response to Arguments

Applicant's arguments with respect to independent claim(s) 1 and 13 have been fully considered but they are not persuasive. Regarding the argument that “neither Lv nor Wells teach or suggest…identifying in a virtual image the one or more fiducial markers physically disposed on the surface and projected into the virtual image by the optical instrument”, Wells teaches an image sensor captures fiducial markers disposed on a windshield that are illuminated and reflected off the windshield (Wells FIG. 5; col 10 lines 34-54). Regarding the arguments that “neither Lv nor Wells teach or suggest…an angle of the surface based on the identified fiducial markers…generating a 3D image depicting the subject based on the angle…or determining a location of the subject in the 3D space based on the angle” and “Xu do not include…one or more fiducial markers physically disposed on a surface thereof”, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Wells teaches fiducial markers physically disposed on a windshield. Xu teaches identifying an angle of the surface based on identified fiducial markers. Lv teaches generating a 3D image depicting the subject or determining a location of the subject in the 3D space based on the angle between the field of view of the camera and the field of reflection of the mirror. Therefore, the combination of Lv, Wells and Xu teaches the limitations in question, as delineated in the above rejection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTINE ZHAO whose telephone number is (703)756-5986. The examiner can normally be reached Monday - Friday 9:00am - 5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached on (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.Z./ Examiner, Art Unit 2677
/ANDREW W BEE/ Supervisory Patent Examiner, Art Unit 2677
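
For context on the technology at issue in the §103 combination, the sketch below illustrates the two ideas the cited art relies on: Lv-style recovery of a 3D point by intersecting the camera's direct-view ray with the ray from the mirrored "virtual camera", and the pinhole-style size-to-range estimate that Xu's fiducial-marker passage describes. All names, numbers, and the geometry setup are illustrative assumptions; this is not code from the application or the cited references.

```python
import numpy as np

def reflect_point(p, plane_point, plane_normal):
    """Mirror a 3D point across a plane (e.g., place the 'virtual camera')."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - 2.0 * np.dot(p - plane_point, n) * n

def reflect_direction(d, plane_normal):
    """Mirror a ray direction across the same plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return d - 2.0 * np.dot(d, n) * n

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two (possibly skew) rays."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Assumed setup: camera at the origin, mirror plane at z = 2 facing the camera.
cam = np.zeros(3)
mirror_pt, mirror_n = np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.0, -1.0])
virtual_cam = reflect_point(cam, mirror_pt, mirror_n)

# dir_direct: pixel ray toward the subject; dir_mirror: pixel ray toward the
# subject's reflection. The real subject lies on the mirrored version of the
# second ray, launched from the virtual camera.
subject = np.array([0.3, 0.1, 1.0])
dir_direct = subject - cam
dir_mirror = reflect_point(subject, mirror_pt, mirror_n) - cam
recovered = triangulate(cam, dir_direct,
                        virtual_cam, reflect_direction(dir_mirror, mirror_n))
print(recovered)  # ~[0.3, 0.1, 1.0]

# Size-to-range heuristic under a simple pinhole assumption:
# distance ≈ focal_length_px * real_marker_size / apparent_size_px.
def marker_distance(focal_px, real_size_m, apparent_size_px):
    return focal_px * real_size_m / apparent_size_px

print(marker_distance(800.0, 0.10, 40.0))  # ~2.0 m for an assumed 10 cm marker
```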

Prosecution Timeline

Aug 11, 2022
Application Filed
Mar 07, 2025
Non-Final Rejection — §103, §112
Jun 24, 2025
Examiner Interview Summary
Jul 17, 2025
Response Filed
Sep 12, 2025
Final Rejection — §103, §112
Dec 18, 2025
Request for Continued Examination
Jan 16, 2026
Response after Non-Final Action
Jan 24, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536695
TRENCH PROFILE DETERMINATION BY MOTION
2y 5m to grant (granted Jan 27, 2026)
Patent 12524883
Systems and Methods for Assessing Cell Growth Rates
2y 5m to grant (granted Jan 13, 2026)
Patent 12518391
SYSTEM AND METHOD FOR IMPROVING IMAGE SEGMENTATION
2y 5m to grant (granted Jan 06, 2026)
Patent 12511900
System and Method for Impact Detection and Analysis
2y 5m to grant (granted Dec 30, 2025)
Patent 12493946
APPARATUS AND METHOD FOR VERIFYING OPTICAL FIBER WORK USING ARTIFICIAL INTELLIGENCE
2y 5m to grant (granted Dec 09, 2025)
Study what changed to get these applications past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 61%
With Interview: 99% (+58.3%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
