Prosecution Insights
Last updated: April 19, 2026
Application No. 17/734,656

Robotic Ventilation System for Correct Mask Placement

Non-Final OA: §103, §112
Filed: May 02, 2022
Examiner: PATEL, ROHAN DEEP
Art Unit: 3785
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Restful Robotics Inc.
OA Round: 1 (Non-Final)
Grant Probability: 57% (Moderate)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 57% (grants 57% of resolved cases; 12 granted / 21 resolved; -12.9% vs TC avg)
Interview Lift: strong, +45.0% (allow rate with vs. without an interview, based on resolved cases with interview)
Typical timeline: 3y 7m avg prosecution; 49 currently pending
Career history: 70 total applications across all art units
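The headline numbers above are simple ratios over resolved cases: 12 granted / 21 resolved is roughly a 57% career allow rate, and interview lift is the allow rate with an interview minus the allow rate without one. A minimal sketch of that arithmetic, assuming hypothetical per-application records with `status` and `interviewed` fields (not the analytics product's actual schema):

```python
# Sketch (not the product's actual code): deriving the examiner headline
# statistics from per-application outcome records. Field names are hypothetical.

def allow_rate(outcomes):
    """Share of resolved cases (granted or abandoned) that were granted."""
    resolved = [o for o in outcomes if o["status"] in ("granted", "abandoned")]
    if not resolved:
        return None
    return sum(o["status"] == "granted" for o in resolved) / len(resolved)

def interview_lift(outcomes):
    """Allow rate with an interview minus allow rate without one."""
    with_iv = allow_rate([o for o in outcomes if o["interviewed"]])
    without_iv = allow_rate([o for o in outcomes if not o["interviewed"]])
    if with_iv is None or without_iv is None:
        return None
    return with_iv - without_iv

# Invented sample shaped like this examiner's record: 12 granted / 21 resolved
sample = [{"status": "granted", "interviewed": i < 8} for i in range(12)] + \
         [{"status": "abandoned", "interviewed": False} for _ in range(9)]
rate = allow_rate(sample)  # 12/21, about 0.571
```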

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 55.4% (+15.4% vs TC avg)
§102: 22.3% (-17.7% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)
Note: "vs TC avg" compares against the Tech Center average estimate; based on career data from 21 resolved cases.
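Each statute above is reported as a rate plus a delta against the Tech Center average. Assuming the delta is simply `rate - tc_avg`, the implied TC average can be backed out, and all four rows imply the same ~40% figure, consistent with a single Tech Center estimate:

```python
# Sketch: recover the implied Tech Center average from each statute's
# examiner rate and its "vs TC avg" delta, assuming delta = rate - tc_avg.
stats = {  # statute: (examiner rate %, delta vs TC avg %)
    "101": (4.8, -35.2),
    "103": (55.4, +15.4),
    "102": (22.3, -17.7),
    "112": (16.4, -23.6),
}
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
# every statute implies a TC average of 40.0%
```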

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 35, 66-67, 70, and 73 are objected to because of the following informalities:
- Claim 1, line 6: "a kinematic mount" should read "the kinematic mount".
- Claim 35, line 6: "a fiducial marker" should read "the fiducial marker".
- Claim 66, line 4: "a kinematic mount" should read "the kinematic mount".
- Claim 66, line 8: "a kinematic mount" should read "the kinematic mount".
- Claim 66, line 10: "the user" should read "a user".
- Claim 66, line 12: "a fiducial marker" should read "the fiducial marker".
- Claim 66, line 22: "the user's face" should read "a user's face".
- Claim 66, line 26: "an appropriate contact force and an appropriate position" should read "the appropriate contact force and the appropriate position".
- Claim 66, line 33: "a face" should read "the face".
- Claim 67, line 6: "a user" should read "the user".
- Claim 67, line 7: "a kinematic mount" should read "the kinematic mount".
- Claim 70, line 4: "a fiducial marker" should read "the fiducial marker".
- Claim 70, line 7: "a user" should read "the user".
- Claim 73, line 6: "a kinematic mount" should read "the kinematic mount".
- Claim 73, line 8: "a user" should read "the user".
- Claim 73, line 27: "an appropriate contact force and an appropriate position" should read "the appropriate contact force and the appropriate position".

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 25, 56, 66, and 73 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

The term "safe and effective" in each of claims 25, 56, 66, and 73 is a relative term which renders the claim indefinite. The term "safe and effective" is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 8-17, 34-35, 43-48, and 65 are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al., CN 108969858A (page numbers refer to the translation), in view of Tabandeh et al., US 2017/0245946.

Regarding claim 1, Huang discloses a robotic ventilation system for correct mask placement (The abstract states that "The invention claims a full-automatic oxygen feeding robot") comprising: a robot (move robot 5) comprising an arm (arm 11), the arm comprising a flange, the flange coupled to an end of the arm (end effector at the end of arm 11 containing mask 12), the arm configured to move the flange along a degree of freedom (Page 7 of the Huang translation says "the mechanical arm 11 is a six-series mechanical arm or seven serial mechanical arm, end effector can be delivered to any point of the space."); a mask coupled to the flange (mask 12), the mask configured to deliver gas to a user (described as an oxygen mask); a ventilator coupled to the mask, the ventilator configured to deliver the gas to the mask (oxygen supply assembly 7 is coupled to mask 12 as depicted in figure 7); a gas tube coupled to both the mask and the ventilator (gas line depicted in figure 3), the gas tube configured to carry gas between the ventilator and the mask (the gas tube would carry gas from the oxygen source to the mask); a controller configured to change a pose of the mask (Page 5 states that "MCU control centre 16 control mechanical arm 11 of the binocular camera 14 and the oxygen mask 12 to generally position the front human face."), the controller further configured to control the delivery of the gas from the ventilator to the user (page 4 states "after adjusting the angle of the oxygen mask, oxygen mask on the face of human body, measuring the environmental conditions, after determining the safety releasing oxygen electromagnetic valve is opened that can realize oxygen supply, through said structure"); and a tracking system (binocular camera 14), the tracking system configured to capture image data of one or more of the mask and a face of the user (Page 5 states that "the camera shoots the face position picture, the picture is transmitted to the MCU control centre 16, starting the machine vision program, by face recognition, binocular distance and posture recognition step, combined with the ultrasonic sensor 13 ranging, finally identifying and locating the actual nose position". As stated, the tracking system, including the ultrasonic transducer and camera in conjunction, captures the position of the face of the user.), wherein the controller is configured to change the mask pose based at least in part on the image data (Page 5 states that "after adjusting the tilt angle of the oxygen mask 12, mechanical arm 11 drive the oxygen mask 12 gradually close to the face.").

Huang fails to teach wherein the arm further comprises a kinematic mount, the kinematic mount usable to do one or more of orient and locate the mask with respect to the flange. Tabandeh teaches an analogous robotically controlled surgical optical tracker that does teach wherein the arm (Figure 5, arm connected above 103) further comprises a kinematic mount (joint 103), the kinematic mount usable to do one or more of orient and locate an attached component with respect to the flange (0048 states "may have a base 105 and various joints and links 102, 103 to provide one or more degrees of freedom to articulate a tool 106 attached to an end effector flange 104.").
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the robotic system of Huang with the teachings of Tabandeh and include a kinematic mount, as it would allow for multiple-degree-of-freedom movement of the mask (0048). Modified Huang would now teach the kinematic mount usable to do one or more of orient and locate the mask with respect to the flange.

Regarding claim 8, modified Huang teaches the system of claim 1, wherein the mounting feature comprises a kinematic mount (0048 of Tabandeh states "may have a base 105 and various joints and links 102, 103 to provide one or more degrees of freedom to articulate a tool 106 attached to an end effector flange 104.").

Regarding claim 9, modified Huang teaches the system of claim 8, wherein a spatial alignment of the mask with respect to the kinematic mount is known a priori (0017 of Tabandeh states "A process to maintain a line of sight (LOS) between a fiducial marker array and one or more optical receivers is provided that includes: positioning one or more optical receivers in an initial location that minimizes LOS disruption within a system; articulating the fiducial marker array with the one or more movable joints to an initial orientation that optimizes the fiducial marker array within a field of view of the one or more optical receivers; recording an initial position and orientation (POSE) of the fiducial marker array;").

Regarding claim 10, modified Huang teaches the system of claim 8, wherein the mounting feature comprises an opening (Figure 3 of Tabandeh depicts an opening within 103 that allows for the arm to be placed through.).

Regarding claim 11, Huang teaches the system of claim 1, wherein the robot comprises the tracking system (the tracking system 14 is located on the end effector as depicted in figure 3.).
Regarding claim 12, Huang teaches the system of claim 1, wherein the tracking system is configured to identify a face feature point (Page 5 states that "the picture is transmitted to the MCU control centre analyzing and locating the actual nose position;").

Regarding claim 13, Huang teaches the system of claim 12, wherein the face feature point comprises one or more of an eye center point, a nose center point, a mouth lateral point, and a mouth center point (Page 5 states that "the picture is transmitted to the MCU control centre analyzing and locating the actual nose position;").

Regarding claim 14, Huang teaches the system of claim 13, wherein the system is configured to move the arm so as to bring the mask toward the face of the user (Page 5 states that "mechanical arm 11 drive the oxygen mask 12 gradually close to the face").

Regarding claim 15, Huang teaches the system of claim 14, wherein the system is configured to move the arm so as to bring the mask into contact with the face of the user (Page 5 states that "when oxygen mask 12 attached to the human face, on the oxygen mask 19 alarm on the contact sensor 12, mechanical arm 11 stops the operation.").

Regarding claim 16, Huang teaches the system of claim 12, wherein the system moves the arm toward the face of the user after the face feature point is identified (Page 5 states that "the position of the human face information transmitted to the MCU control centre 16, MCU control centre 16 control mechanical arm 11 of the binocular camera 14 and the oxygen mask 12 to generally position the front human face.").

Regarding claim 17, Huang teaches the system of claim 16, wherein the system moves the arm so as to bring the mask into contact with the face of the user after the face feature point is identified (Page 5 states "when the oxygen mask attached to the human face located on the oxygen mask of contact sensor alarm, stops the action of the mechanical arm,").
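The placement sequence the examiner reads out of Huang for claims 12-17 (identify a face feature point from image data, advance the mask toward the face, stop when the contact sensor alarms) amounts to a simple closed-loop approach routine. A hypothetical sketch for illustration only; none of these class or function names come from Huang, Tabandeh, or the application, and the 1-D depth model is a simplification:

```python
# Hypothetical sketch of the approach loop described for Huang: locate a
# face feature, step the arm toward it, stop on contact. All names invented.

class SimTracker:
    """Stand-in tracker: reports the face feature at a fixed depth (mm)."""
    def __init__(self, face_mm): self.face_mm = face_mm
    def locate_feature(self, name): return self.face_mm

class SimArm:
    def __init__(self): self.pos_mm = 0.0; self.moving = True
    def step_toward(self, target_mm, step_mm):
        self.pos_mm = min(self.pos_mm + step_mm, target_mm)
    def stop(self): self.moving = False

class SimContactSensor:
    """Fires once the mask position reaches the face."""
    def __init__(self, arm, face_mm): self.arm, self.face_mm = arm, face_mm
    def triggered(self): return self.arm.pos_mm >= self.face_mm

def place_mask(tracker, arm, sensor, step_mm=2.0, max_steps=200):
    """Advance the mask toward a tracked feature until contact is sensed."""
    target = tracker.locate_feature("nose_center")  # face feature point
    if target is None:
        raise RuntimeError("no face feature identified; arm not moved")
    for _ in range(max_steps):
        if sensor.triggered():
            arm.stop()      # per Huang, the arm stops on contact-sensor alarm
            return True
        arm.step_toward(target, step_mm)
    arm.stop()
    return False  # step budget exhausted without contact
```

With a face 50 mm away, `place_mask(SimTracker(50.0), arm, sensor)` advances in 2 mm steps and stops the arm once the simulated sensor fires.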
Regarding claim 34, Huang teaches the system of claim 1, further comprising a biometric sensor system, the biometric sensor system comprising a biometric sensor (camera and ultrasonic sensor in conjunction), wherein the controller is further configured to change the pose of the mask based at least in part on biometric data from the biometric sensor system (Page 4 states that "starting the machine vision program, by face recognition, binocular distance and posture recognition step, combined with the ultrasonic sensor 13 ranging, finally identifying and locating the actual nose position; combine MEMS tilt angle measured by the tilt angle data, calculating the arrival of oxygen mask position and real nostril positions between tilt angle and distance, after adjusting the tilt angle of the oxygen mask 12, mechanical arm 11 drive the oxygen mask 12 gradually close to the face. in the process").

Regarding claim 35, Huang teaches a robotic ventilation system for correct mask placement (The abstract states that "The invention claims a full-automatic oxygen feeding robot") comprising: a robot (move robot 5) comprising an arm (arm 11), the arm comprising a flange, the flange coupled to an end of the arm (end effector at the end of arm 11 containing mask 12), the arm configured to move the flange along a degree of freedom (Page 7 says "the mechanical arm 11 is a six-series mechanical arm or seven serial mechanical arm, end effector can be delivered to any point of the space."); a mask coupled to the flange (mask 12), the mask configured to deliver gas to a user (described as an oxygen mask); a ventilator coupled to the mask, the ventilator configured to deliver the gas to the mask (oxygen supply assembly 7 is coupled to mask 12 as depicted in figure 7); a gas tube coupled to both the mask and the ventilator (gas line depicted in figure 3), the gas tube configured to carry gas between the ventilator and the mask (the gas tube would carry gas from the oxygen source to the mask);
a controller configured to change a pose of the mask (Page 5 states that "MCU control centre 16 control mechanical arm 11 of the binocular camera 14 and the oxygen mask 12 to generally position the front human face."), the controller further configured to control the delivery of the gas from the ventilator to the user (page 4 states "after adjusting the angle of the oxygen mask, oxygen mask on the face of human body, measuring the environmental conditions, after determining the safety releasing oxygen electromagnetic valve is opened that can realize oxygen supply, through said structure"); and a tracking system (binocular camera 14), the tracking system configured to capture image data of one or more of the mask and a face of the user (Page 5 states that "the camera shoots the face position picture, the picture is transmitted to the MCU control centre 16, starting the machine vision program, by face recognition, binocular distance and posture recognition step, combined with the ultrasonic sensor 13 ranging, finally identifying and locating the actual nose position". As stated, the tracking system, including the ultrasonic transducer and camera in conjunction, captures the position of the face of the user.), wherein the controller is configured to change the mask pose based at least in part on the image data (Page 5 states that "after adjusting the tilt angle of the oxygen mask 12, mechanical arm 11 drive the oxygen mask 12 gradually close to the face.").

Huang fails to teach the mask comprising a fiducial marker and wherein the tracking system is configured to track the fiducial marker. Tabandeh teaches an analogous robotically controlled surgical optical tracker that does teach a surgical tool (end effector with tool 106) requiring a fiducial marker (0053 states "A zoomed in view of the end effector of the robot 123 is illustratively shown in FIG. 3. A fiducial marker array 107 may be attached to and/or fixed in a position and orientation to a movable joint 119") and wherein the tracking system is configured to track the fiducial marker (0047 states "system consisting of at least one fiducial marker array, an optical tracking system with optical receivers, and a movable joint in mechanical communication with at least one of the fiducial marker array or optical receivers.").

It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the mask of Huang with the teachings of the tool of Tabandeh and include a fiducial marker, as this allows for a determination of an ideal field of view of the part of the patient that is being interacted with, and provides an optimization of the tracking system (0047 and 0056). Modified Huang now includes the mask requiring a fiducial marker.

Regarding claim 42, Huang teaches the system of claim 41, wherein the robot comprises the tracking system (the tracking system 14 is located on the end effector as depicted in figure 3.).

Regarding claim 43, Huang teaches the system of claim 42, wherein the tracking system is configured to identify a face feature point (Page 5 states that "the picture is transmitted to the MCU control centre analyzing and locating the actual nose position;").

Regarding claim 44, Huang teaches the system of claim 43, wherein the face feature point comprises one or more of an eye center point, a nose center point, a mouth lateral point, and a mouth center point (Page 5 states that "the picture is transmitted to the MCU control centre analyzing and locating the actual nose position;").

Regarding claim 45, Huang teaches the system of claim 44, wherein the system is configured to move the arm so as to bring the mask toward the face of the user (Page 5 states that "mechanical arm 11 drive the oxygen mask 12 gradually close to the face").
Regarding claim 46, Huang teaches the system of claim 45, wherein the system is configured to move the arm so as to bring the mask into contact with the face of the user (Page 5 states that "when oxygen mask 12 attached to the human face, on the oxygen mask 19 alarm on the contact sensor 12, mechanical arm 11 stops the operation.").

Regarding claim 47, Huang teaches the system of claim 43, wherein the system moves the arm toward the face of the user after the face feature point is identified (Page 5 states that "the position of the human face information transmitted to the MCU control centre 16, MCU control centre 16 control mechanical arm 11 of the binocular camera 14 and the oxygen mask 12 to generally position the front human face.").

Regarding claim 48, Huang teaches the system of claim 43, wherein the system moves the arm so as to bring the mask into contact with the face of the user after the face feature point is identified (Page 5 states "when the oxygen mask attached to the human face located on the oxygen mask of contact sensor alarm, stops the action of the mechanical arm,").

Regarding claim 65, Huang teaches the system of claim 35, further comprising a biometric sensor system, the biometric sensor system comprising a biometric sensor (camera and ultrasonic sensor in conjunction), wherein the controller is further configured to change the pose of the mask based at least in part on biometric data from the biometric sensor system (Page 4 states that "starting the machine vision program, by face recognition, binocular distance and posture recognition step, combined with the ultrasonic sensor 13 ranging, finally identifying and locating the actual nose position; combine MEMS tilt angle measured by the tilt angle data, calculating the arrival of oxygen mask position and real nostril positions between tilt angle and distance, after adjusting the tilt angle of the oxygen mask 12, mechanical arm 11 drive the oxygen mask 12 gradually close to the face. in the process").

Claims 2-7 and 36-41 are rejected under 35 U.S.C. 103 as being unpatentable over Huang and Tabandeh, further in view of Meger et al., US 2018/0220897.

Regarding claim 2, modified Huang teaches the system of claim 1, but fails to teach wherein the system is configured to correctly deliver the breathable gas to the user after detecting a state of the user. Meger discloses an analogous masked system (figures 13a-14b) that does teach wherein the system is configured to correctly deliver the breathable gas to the user after detecting a state of the user (0346 states "Upon detection that PAP is required, system 10 drives an air source 210 to apply air pressure to the mask via an air delivery tube 212").

It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify Huang with the teachings of Meger and include wherein the system is configured to correctly deliver the breathable gas to the user after detecting a state of the user, as this allows for a selective treatment of sleep apnea without preventing the subject from falling asleep (0343).

Regarding claim 3, modified Huang in view of Meger teaches the system of claim 2, wherein the state of the user comprises a requisite sleep state of the user (0343 states "Upon detecting that the subject has fallen asleep, the system activates the PAP device.").

Regarding claim 4, modified Huang in view of Meger teaches the system of claim 3, wherein the requisite sleep state is one or more of predetermined by the user (0331 states "pattern analysis module 16 of control unit 14 of patient monitoring system 10 includes protocol-input functionality 114 that is configured to receive an input that is indicative of a treatment protocol that has been assigned to the patient.") and is determined by the system (0343 states "the system activates the PAP device a predefined period of time after the system identifies quiet breathing").
Regarding claim 5, modified Huang in view of Meger teaches the system of claim 4, wherein the system delivers the breathable gas to the user after the user falls asleep (0343 states "Upon detecting that the subject has fallen asleep, the system activates the PAP device.").

Regarding claim 6, modified Huang in view of Meger teaches the system of claim 4, wherein the system delivers the breathable gas to the user after fulfillment of a time-based condition (0343 states "the system activates the PAP device a predefined period of time after the system identifies quiet breathing").

Regarding claim 7, modified Huang in view of Meger teaches the system of claim 6, wherein the system delivers the breathable gas to the user after one or more of passage of a predetermined amount of time (0343 of Meger states "Upon detecting that the subject has fallen asleep, the system activates the PAP device.") and after a predetermined time of day is reached (0180 states "the time of day, with respect to a patient's sleep cycle").

Regarding claim 36, modified Huang teaches the system of claim 35, but fails to teach wherein the system is configured to correctly deliver the breathable gas to the user after detecting a state of the user. Meger discloses an analogous masked system (figures 13a-14b) that does teach wherein the system is configured to correctly deliver the breathable gas to the user after detecting a state of the user (0346 states "Upon detection that PAP is required, system 10 drives an air source 210 to apply air pressure to the mask via an air delivery tube 212").
It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify Huang with the teachings of Meger and include wherein the system is configured to correctly deliver the breathable gas to the user after detecting a state of the user, as this allows for a selective treatment of sleep apnea without preventing the subject from falling asleep (0343).

Regarding claim 37, modified Huang in view of Meger teaches the system of claim 36, wherein the state of the user comprises a requisite sleep state of the user (0343 states "Upon detecting that the subject has fallen asleep, the system activates the PAP device.").

Regarding claim 38, modified Huang in view of Meger teaches the system of claim 37, wherein the requisite sleep state is one or more of predetermined by the user (0331 states "pattern analysis module 16 of control unit 14 of patient monitoring system 10 includes protocol-input functionality 114 that is configured to receive an input that is indicative of a treatment protocol that has been assigned to the patient.") and is determined by the system (0343 states "the system activates the PAP device a predefined period of time after the system identifies quiet breathing").

Regarding claim 39, modified Huang in view of Meger teaches the system of claim 38, wherein the system delivers the breathable gas to the user after the user falls asleep (0343 states "Upon detecting that the subject has fallen asleep, the system activates the PAP device.").

Regarding claim 40, modified Huang in view of Meger teaches the system of claim 38, wherein the system delivers the breathable gas to the user after fulfillment of a time-based condition (0343 states "the system activates the PAP device a predefined period of time after the system identifies quiet breathing").
Regarding claim 41, modified Huang in view of Meger teaches the system of claim 40, wherein the system delivers the breathable gas to the user after one or more of passage of a predetermined amount of time (0343 of Meger states "Upon detecting that the subject has fallen asleep, the system activates the PAP device.") and after a predetermined time of day is reached (0180 states "the time of day, with respect to a patient's sleep cycle").

Claims 67-72 are rejected under 35 U.S.C. 103 as being unpatentable over Huang and Tabandeh, further in view of Bowling et al., US 2014/0039517.

Regarding claim 67, Huang teaches a method for correct mask placement using a robotic ventilation system (abstract), comprising: capturing image data, the image data comprising a facial feature of a user (Page 5 states that "the camera shoots the face position picture, the picture is transmitted to the MCU control centre 16, starting the machine vision program, by face recognition, binocular distance and posture recognition step, combined with the ultrasonic sensor 13 ranging, finally identifying and locating the actual nose position".), by a system comprising: a robot (move robot 5) comprising an arm (11), the arm comprising a flange, the flange coupled to an end of the arm (end effector at the end of arm 11 containing mask 12), the arm configured to move the flange along a degree of freedom (Page 7 of the translation says "the mechanical arm 11 is a six-series mechanical arm or seven serial mechanical arm, end effector can be delivered to any point of the space."); a mask coupled to the flange (mask 12), the mask configured to deliver gas to a user (described as an oxygen mask); a ventilator coupled to the mask, the ventilator configured to deliver the gas to the mask (oxygen supply assembly 7 is coupled to mask 12 as depicted in figure 7);
a gas tube coupled to both the mask and the ventilator (gas line depicted in figure 3), the gas tube configured to carry gas between the ventilator and the mask (the gas tube would carry gas from the oxygen source to the mask); the controller further configured to control the delivery of the gas from the ventilator to the user (page 4 states “after adjusting the angle of the oxygen mask, oxygen mask on the face of human body, measuring the environmental conditions, after determining the safety releasing oxygen electromagnetic valve is opened that can realize oxygen supply, through said structure”); and a tracking system (binocular camera 14), the tracking system configured to capture image data of one or more of the mask and a face of the user (Page 5 states that “the camera shoots the face position picture, the picture is transmitted to the MCU control centre 16, starting the machine vision program, by face recognition, binocular distance and posture recognition step, combined with the ultrasonic sensor 13 ranging, finally identifying and locating the actual nose position”. As stated the tracking system including the ultrasonic transducer and camera in conjunction capture the position of the face of the user.), wherein the controller is configured to change the mask pose based at least in part on the image data (Page 5 states that “after adjusting the tilt angle of the oxygen mask 12, mechanical arm 11 drive the oxygen mask 12 gradually close to the face.”). 
tracking, by the system, the facial feature of the user (Page 5 states "the picture is transmitted to the MCU control centre analyzing and locating the actual nose position;"); determining, by the system, a face coordinate frame with respect to the user's face (page 6 states "the UWB positioning is finished according to the following steps, which lies in the three or more than three position point of UWB base station, reading the UWB tag body for three-dimensional positioning calculation; the drive movable wheel-type walking mechanism move toward the positioning position of UWB tags obtained, UWB base station in each position point reads the UWB tag, combining three dimensional space position of RSSI signal measuring method, calibration UWB tag, after repeated calibration UWB tag position, so the UWB tag position plus offset, calculating to obtain the rough position of human face").

Huang fails to explicitly teach wherein the arm further comprises a kinematic mount, the kinematic mount usable to do one or more of orient and locate the mask with respect to the flange; determining, by the system, a mask system coordinate frame with respect to the mask; determining a transform, by the system, between the face coordinate frame and the mask system coordinate frame; and storing the transform, by the system, as a spatial relationship between the facial coordinate system and the mask coordinate system, the spatial relationship configured for correct placement of the mask on the user's face.
Tabandeh teaches an analogous robotically controlled surgical optical tracker that does teach wherein the arm (Figure 5, arm connected above 103) further comprises a kinematic mount (joint 103), the kinematic mount usable to do one or more of orient and locate an attached component with respect to the flange (0048 states “may have a base 105 and various joints and links 102, 103 to provide one or more degrees of freedom to articulate a tool 106 attached to an end effector flange 104.”). It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the robotic system of Huang with the teachings of Tabandeh and include a kinematic mount, as it would allow for multiple-degree-of-freedom movement of the mask (0048). Modified Huang would now teach the kinematic mount usable to do one or more of orient and locate the mask with respect to the flange. Huang in view of Tabandeh still fails to teach determining, by the system, a mask system coordinate frame with respect to the mask; determining a transform, by the system, between the face coordinate frame and the mask system coordinate frame; and storing the transform, by the system, as a spatial relationship between the facial coordinate system and the mask coordinate system, the spatial relationship configured for correct placement of the mask on the user's face. Bowling teaches an analogous robotically controlled surgical optical tracker that does teach determining a transform, by the system, between the coordinate frame of the user’s anatomy and the surgical tool coordinate frame (0112 states “The position vector and the rotation matrix that define the relation of one coordinate system to another collectively form the homogenous transformation matrix. The symbol .sub.i.sup.i-1T is the notation for the homogenous transformation matrix that identifies the position and orientation of coordinate system i with respect to coordinate system i-1. 
Two components of the system that have their own coordinate systems are the bone tracker 212 and the tool tracker 214. In FIG. 12 these coordinate systems are represented as, respectively, bone tracker coordinate system BTRK and tool tracker coordinate system TLTR.”) and storing the transform, by the system, as a spatial relationship between the user’s coordinate system and the surgical tool’s coordinate system, the spatial relationship configured for correct placement of the surgical tool on the user’s respective body part (0134 states “Coordinate transformer 272 is a software module that references the data that defines the relationship between the preoperative images of the patient and the patient tracker 212. Coordinate transformer 272 also stores the data indicating the pose of the instrument energy applicator 184 relative to the tool tracker 214.”). It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the mask system of modified Huang with the teachings of Bowling and include determining, by the system, a mask system coordinate frame with respect to the mask; determining a transform, by the system, between the face coordinate frame and the mask system coordinate frame; and storing the transform, by the system, as a spatial relationship between the facial coordinate system and the mask coordinate system, the spatial relationship configured for correct placement of the mask on the user's face, as this generates an accurate path leading to an accurate placement of the mask on the user (0009).

Regarding claim 68, Huang teaches the method of claim 67, wherein the image data further comprises image data regarding a facial fiducial marker comprised in the user's face (UWB tag 1 is located on the user’s face). 
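Editorial note: the string “.sub.i.sup.i-1T” in the Bowling quotation above is the flattened publication-text rendering of a standard homogeneous-transform symbol. Assuming the conventional robotics notation that the quotation appears to describe (a rotation matrix and position vector combined into one 4x4 matrix), the quoted relationship can be written as:

```latex
% ^{i-1}_{i}T : pose of coordinate system i expressed in coordinate system i-1,
% built from a rotation matrix R and a position vector p.
{}^{i-1}_{i}T \;=\;
\begin{bmatrix}
{}^{i-1}_{i}R & {}^{i-1}p_{i} \\
\mathbf{0}^{\top} & 1
\end{bmatrix},
\qquad
{}^{0}_{2}T \;=\; {}^{0}_{1}T \, {}^{1}_{2}T .
```

The second identity (composition by matrix multiplication) is what allows a tracking system to chain separately measured poses, e.g., camera-to-face and camera-to-mask, into a single face-to-mask transform of the kind the claims recite.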
Regarding claim 69, Huang teaches the method of claim 68, wherein the tracking step further comprises tracking the facial fiducial marker (page 6 states “the UWB positioning is finished according to the following steps, which lies in the three or more than three position point of UWB base station, reading the UWB tag body for three-dimensional positioning calculation;”).

Regarding claim 70, Huang teaches a method for correct mask placement using a robotic ventilation system (abstract), comprising: capturing image data, the image data comprising a facial feature of a user (Page 5 states that “the camera shoots the face position picture, the picture is transmitted to the MCU control centre 16, starting the machine vision program, by face recognition, binocular distance and posture recognition step, combined with the ultrasonic sensor 13 ranging, finally identifying and locating the actual nose position”.), by a system comprising: a robot (move robot 5) comprising an arm (11), the arm comprising a flange, the flange coupled to an end of the arm (end effector at the end of arm 11 containing mask 12), the arm configured to move the flange along a degree of freedom (Page 7 says “the mechanical arm 11 is a six-series mechanical arm or seven serial mechanical arm, end effector can be delivered to any point of the space.”); a mask coupled to the flange (mask 12), the mask configured to deliver gas to a user (described as an oxygen mask); a ventilator coupled to the mask, the ventilator configured to deliver the gas to the mask (oxygen supply assembly 7 is coupled to mask 12 as depicted in figure 7). 
a gas tube coupled to both the mask and the ventilator (gas line depicted in figure 3), the gas tube configured to carry gas between the ventilator and the mask (the gas tube would carry gas from the oxygen source to the mask); the controller further configured to control the delivery of the gas from the ventilator to the user (page 4 states “after adjusting the angle of the oxygen mask, oxygen mask on the face of human body, measuring the environmental conditions, after determining the safety releasing oxygen electromagnetic valve is opened that can realize oxygen supply, through said structure”); and a tracking system (binocular camera 14), the tracking system configured to capture image data of one or more of the mask and a face of the user (Page 5 states that “the camera shoots the face position picture, the picture is transmitted to the MCU control centre 16, starting the machine vision program, by face recognition, binocular distance and posture recognition step, combined with the ultrasonic sensor 13 ranging, finally identifying and locating the actual nose position”. As stated, the tracking system, including the ultrasonic sensor and camera in conjunction, captures the position of the face of the user.), wherein the controller is configured to change the mask pose based at least in part on the image data (Page 5 states that “after adjusting the tilt angle of the oxygen mask 12, mechanical arm 11 drive the oxygen mask 12 gradually close to the face.”). 
tracking, by the system, the facial feature of the user (Page 5 states “the picture is transmitted to the MCU control centre analyzing and locating the actual nose position;”); determining, by the system, a face coordinate frame with respect to the user's face (page 6 states “the UWB positioning is finished according to the following steps, which lies in the three or more than three position point of UWB base station, reading the UWB tag body for three-dimensional positioning calculation; the drive movable wheel-type walking mechanism move toward the positioning position of UWB tags obtained, UWB base station in each position point reads the UWB tag, combining three dimensional space position of RSSI signal measuring method, calibration UWB tag, after repeated calibration UWB tag position, so the UWB tag position plus offset, calculating to obtain the rough position of human face”); Huang fails to explicitly teach that the image data comprises data regarding a fiducial marker included in the mask, wherein the tracking system tracks the fiducial marker, and determining, by the system, a mask system coordinate frame with respect to the mask; determining a transform, by the system, between the face coordinate frame and the mask system coordinate frame; and storing the transform, by the system, as a spatial relationship between the facial coordinate system and the mask coordinate system, the spatial relationship configured for correct placement of the mask on the user's face. Tabandeh teaches an analogous robotically controlled surgical optical tracker that does teach a surgical tool (end effector with tool 106) comprising a fiducial marker (0053 states “A zoomed in view of the end effector of the robot 123 is illustratively shown in FIG. 3. 
A fiducial marker array 107 may be attached to and/or fixed in a position and orientation to a movable joint 119”) and wherein the tracking system is configured to track the fiducial marker (0047 states “system consisting of at least one fiducial marker array, an optical tracking system with optical receivers, and a movable joint in mechanical communication with at least one of the fiducial marker array or optical receivers.”). It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the mask of Huang with the teachings of the tool of Tabandeh and include a fiducial marker, as this allows for a determination of an ideal field of view of the part of the patient that is being interacted with, and provides an optimization of the tracking system (0047 and 0056). Modified Huang now includes a mask comprising a fiducial marker. Huang in view of Tabandeh still fails to teach determining, by the system, a mask system coordinate frame with respect to the mask; determining a transform, by the system, between the face coordinate frame and the mask system coordinate frame; and storing the transform, by the system, as a spatial relationship between the facial coordinate system and the mask coordinate system, the spatial relationship configured for correct placement of the mask on the user's face. Bowling teaches an analogous robotically controlled surgical optical tracker that does teach determining a transform, by the system, between the coordinate frame of the user’s anatomy and the surgical tool coordinate frame (0112 states “The position vector and the rotation matrix that define the relation of one coordinate system to another collectively form the homogenous transformation matrix. The symbol .sub.i.sup.i-1T is the notation for the homogenous transformation matrix that identifies the position and orientation of coordinate system i with respect to coordinate system i-1. 
Two components of the system that have their own coordinate systems are the bone tracker 212 and the tool tracker 214. In FIG. 12 these coordinate systems are represented as, respectively, bone tracker coordinate system BTRK and tool tracker coordinate system TLTR.”) and storing the transform, by the system, as a spatial relationship between the user’s coordinate system and the surgical tool’s coordinate system, the spatial relationship configured for correct placement of the surgical tool on the user’s respective body part (0134 states “Coordinate transformer 272 is a software module that references the data that defines the relationship between the preoperative images of the patient and the patient tracker 212. Coordinate transformer 272 also stores the data indicating the pose of the instrument energy applicator 184 relative to the tool tracker 214.”). It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the mask system of modified Huang with the teachings of Bowling and include determining, by the system, a mask system coordinate frame with respect to the mask; determining a transform, by the system, between the face coordinate frame and the mask system coordinate frame; and storing the transform, by the system, as a spatial relationship between the facial coordinate system and the mask coordinate system, the spatial relationship configured for correct placement of the mask on the user's face, as this generates an accurate path leading to an accurate placement of the mask on the user (0009).

Regarding claim 71, Huang teaches the method of claim 70, wherein the image data further comprises image data regarding a facial fiducial marker comprised in the user's face (UWB tag 1 is located on the user’s face). 
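Editorial note: the transform-determining and transform-storing limitations mapped to Bowling above can be sketched numerically. This is an illustrative sketch only, not taken from any cited reference; the variable names and pose values are hypothetical, and it assumes a tracking system that reports both the face pose and the mask pose as 4x4 homogeneous matrices in a common camera frame.

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses reported by a tracker, both expressed in the camera
# frame (identity rotations for brevity; distances in meters).
T_cam_face = pose(np.eye(3), np.array([0.10, 0.00, 0.50]))
T_cam_mask = pose(np.eye(3), np.array([0.10, 0.05, 0.30]))

# Transform between the face frame and the mask frame, obtained by
# composing the two camera-frame poses: T_face_mask = inv(T_cam_face) @ T_cam_mask.
T_face_mask = np.linalg.inv(T_cam_face) @ T_cam_mask

# This single matrix is the stored "spatial relationship": here the mask
# origin sits 0.05 m along y and 0.20 m toward the camera from the face origin.
print(T_face_mask[:3, 3])
```

The composed matrix, rather than the raw camera-frame poses, is what a placement controller would reuse, since it stays meaningful even as the camera or the user's head moves.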
Regarding claim 72, Huang teaches the method of claim 71, wherein the tracking step further comprises tracking the facial fiducial marker (page 3 states “the UWB positioning is finished according to the following steps, which lies in the three or more than three position point of UWB base station, reading the UWB tag body for three-dimensional positioning calculation;”).

Allowable Subject Matter

Claims 18-33 and 49-64 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: Regarding claims 18 and 49, the prior art of record fails to teach, disclose, or render obvious wherein, following bringing the mask into contact with the face of the user, the system commands the arm to maintain one or more of an appropriate contact force of the mask with respect to the user's face and an appropriate pose of the mask with respect to the user's face, in addition to other limitations. Huang is silent regarding this aspect: Huang mentions the operation of the mechanical arm being stopped once appropriate contact is made; however, it does not further expand on the concept of the arm actively maintaining this force and position with respect to the user’s face based on a movement of the user’s face. Meger discloses an analogous PAP device (paragraph 0346) in figures 13a and 13b that does contain the use of a mask being brought into contact with a user’s face upon PAP detection; however, this device is already located directly adjacent to the user’s face and comprises a spring to help make contact with the face of the user, as opposed to a robotic arm like what is disclosed in the present invention. Since the method of movement of the mask is completely different, it would not be obvious for one of ordinary skill in the art to apply this to a robotic arm that has the ability to move with multiple degrees of freedom and provide an even more complete seal. Claims 19-33 and 50-64 either depend from claims 18 and 49 or contain the same subject matter. Claims 66 and 73-76 would be allowable if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROHAN DEEP PATEL whose telephone number is (571)270-5538. The examiner can normally be reached Mon - Fri 5:30 AM - 3:00 PM PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brandy S Lee, can be reached at (571) 270-7410. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROHAN PATEL/
Examiner, Art Unit 3785

/BRANDY S LEE/
Supervisory Patent Examiner, Art Unit 3785

Prosecution Timeline

May 02, 2022
Application Filed
Dec 30, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594211
WALKING ASSIST DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12582782
DRY SALT THERAPY DEVICE WITH CONVERGING-DIVERGING NOZZLE
2y 5m to grant Granted Mar 24, 2026
Patent 12575995
WEARABLE MOTION ASSISTANCE DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12527933
Automatic Placement of a Mask
2y 5m to grant Granted Jan 20, 2026
Patent 12508388
TUBE CONNECTOR FOR VENTILATOR SYSTEM
2y 5m to grant Granted Dec 30, 2025
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
57%
Grant Probability
99%
With Interview (+45.0%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 21 resolved cases by this examiner. Grant probability derived from career allow rate.
