Prosecution Insights
Last updated: April 19, 2026
Application No. 17/780,487

APPLICATION TO GUIDE MASK FITTING

Final Rejection: §101, §102, §103, §112
Filed: May 26, 2022
Examiner: DALE, ABIGAYLE ANN
Art Unit: 3785
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: ResMed
OA Round: 2 (Final)
Grant Probability: 30% (At Risk)
OA Rounds: 3-4
To Grant: 3y 9m
With Interview: 99%

Examiner Intelligence

Grants only 30% of cases
Career Allow Rate: 30% (3 granted / 10 resolved; -40.0% vs TC avg)

Strong interview lift
Interview Lift: +77.8% (allowance with vs. without an interview, among resolved cases with interview)

Typical timeline
Avg Prosecution: 3y 9m (42 currently pending)

Career history
Total Applications: 52 (across all art units)
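The card metrics above (career allow rate, interview lift) follow directly from counts over resolved cases. A minimal sketch of that arithmetic, using a hypothetical `ResolvedCase` record in place of whatever the underlying docket data actually looks like:

```python
# Illustrative only: a hypothetical data model for deriving the
# "Examiner Intelligence" metrics from resolved-case records.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    allowed: bool        # True if the application granted
    had_interview: bool  # True if an examiner interview was held

def allow_rate(cases):
    """Share of resolved cases that granted (0.0 when none resolved)."""
    return sum(c.allowed for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases):
    """Allowance-rate difference: cases with an interview minus cases without."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# 3 granted out of 10 resolved reproduces the 30% career allow rate above.
history = [ResolvedCase(allowed=i < 3, had_interview=i < 2) for i in range(10)]
print(round(allow_rate(history), 2))  # 0.3
```

The dashboard's +77.8% lift would be computed the same way over this examiner's actual resolved cases; the sample `history` here is fabricated purely to exercise the functions.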

Statute-Specific Performance

§101: 3.7% (-36.3% vs TC avg)
§102: 16.2% (-23.8% vs TC avg)
§103: 47.9% (+7.9% vs TC avg)
§112: 30.5% (-9.5% vs TC avg)

Tech Center average is an estimate • Based on career data from 10 resolved cases
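The "vs TC avg" deltas are simply the examiner's per-statute rate minus the Tech Center estimate; notably, all four deltas shown are consistent with a flat 40.0% TC average. A short sketch (the 40.0% baseline is inferred from the card's own numbers, not stated elsewhere):

```python
# Per-statute rates from the card above (percent), and the Tech Center
# average implied by the "vs TC avg" deltas (40.0% for every statute).
examiner_rate = {"101": 3.7, "102": 16.2, "103": 47.9, "112": 30.5}
tc_average = {s: 40.0 for s in examiner_rate}

def vs_tc_deltas(rates, baseline):
    """Examiner rate minus TC baseline, rounded to one decimal place."""
    return {s: round(rates[s] - baseline[s], 1) for s in rates}

print(vs_tc_deltas(examiner_rate, tc_average))
# {'101': -36.3, '102': -23.8, '103': 7.9, '112': -9.5}
```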

Office Action

Rejections under §101, §102, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This office action is in response to the amendments filed on 11/06/2025. Per the amendment, claims 1, 3-5, 7-8, 11-12, 14, 17, 20, 30-32, 35, and 39 are as currently amended; claims 2, 6, 9, 13, 18-19, 21-29, 33-34, 36-38, and 40-41 are as previously presented; claims 10, 15-16, and 42-57 are cancelled; and claims 58-60 are new. As such, claims 1-9, 11-14, 17-41, and 58-60 are pending in the instant application. Applicant’s amendments filed on 11/06/2025 overcome each and every drawing objection, claim objection, and 112(b) rejection previously set forth in the Non-Final Office Action mailed 06/13/2025.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 59 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 59 recites the limitation "subsequent to confirmation that the fit of the patient interface on the patient has been corrected or improved" in lines 3-4. There is insufficient antecedent basis for the confirmation of the fit of the patient interface on the patient in the limitation above.
However, claim 58 provides antecedent basis for the limitation of claim 59 listed above. Hence, for the purpose of examination, claim 59 will be interpreted as depending from claim 58.

Claim Rejections - 35 USC § 101

Claims 12-14, 17-29, and 58-60 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more. In accordance with MPEP §2106.04, each of Claims 12-14, 17-29, and 58-60 has been analyzed to determine whether it is directed to any judicial exceptions.

Step 2A, Prong 1 per MPEP 2106.04(a)

Each of Claims 12-14, 17-29, and 58-60 recites at least one step or instruction for collecting and evaluating information, which is grouped as a mental process in MPEP 2106.04(a)(2)(III). Accordingly, each of Claims 12-14, 17-29, and 58-60 recites an abstract idea. Specifically, Claim 12 recites a device for guiding a patient to position and/or adjust a fitting position of a patient interface of a continuous positive air pressure (CPAP) system, the device comprising: a display; a camera; a memory; and at least one hardware processor coupled to the display, the camera, and the memory, the at least one hardware processor configured to: receive an indication identifying a type of the patient interface, wherein the patient interface is configured to engage with at least one airway of the patient and supply, to the patient, breathable gas that is received from a continuous positive air pressure (CPAP) device of the CPAP system, receive, from the camera, one or more images that include the patient with the patient interface; analyse the received one or more images to determine a fit of the patient interface on the patient based on a comparison between one or more reference images that include the patient with the patient interface, and the received one or more images that include the patient with the patient interface (judgment or evaluation, which is grouped as a mental process in MPEP
§2106.04(a)(2)(III)); and based on the analysis, display, on the display, feedback for improving the fit of the patient interface on the patient (judgment or evaluation, which is grouped as a mental process in MPEP §2106.04(a)(2)(III)). Further, dependent claims 13-14, 17-29, and 58-60 merely include limitations that either further define the abstract idea (and thus do not make the abstract idea any less abstract) or amount to no more than generally linking the use of the abstract idea to a particular technological environment or field of use because they are merely incidental or token additions to the claims that do not alter or affect how the claimed functions/steps are performed. Accordingly, as indicated above, each of the above-identified claims recites an abstract idea as in MPEP §2106.04(a).

Step 2A, Prong 2 per MPEP 2106.04(d)

The above-identified abstract idea in independent Claim 12 (and its respective dependent Claims 13-14, 17-29, and 58-60) is not integrated into a practical application under MPEP 2106.04(d) because the additional elements (identified above in independent Claim 12), either alone or in combination, generally link the use of the above-identified abstract idea to a particular technological environment or field of use according to MPEP 2106.05(h) or represent insignificant extra-solution activity according to MPEP 2106.05(g).
More specifically, the additional elements of: receive an indication identifying a type of the patient interface, wherein the patient interface is configured to engage with at least one airway of the patient and supply, to the patient, breathable gas that is received from a continuous positive air pressure (CPAP) device of the CPAP system; and receive, from the camera, one or more images that include the patient with the patient interface are generically recited data gathering steps in independent Claim 12 (and its respective dependent claims) which do not improve the functioning of a computer, or any other technology or technical field according to MPEP 2106.04(d)(1) and 2106.05(a). Nor do these above-identified additional elements serve to apply the above-identified abstract idea with, or by use of, a particular machine according to MPEP 2106.05(b), effect a transformation according to MPEP 2106.05(c), provide a particular treatment or prophylaxis according to MPEP 2106.04(d)(2) or apply or use the above-identified abstract idea in some other meaningful way beyond generally linking the use thereof to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception according to MPEP 2106.04(d)(2) and 2106.05(e). Furthermore, the above-identified additional elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer in accordance with MPEP 2106.05(f). For at least these reasons, the abstract idea identified above in independent Claim 12 (and its respective dependent claims) is not integrated into a practical application in accordance with MPEP §2106.04(d). 
Moreover, the above-identified abstract idea is not integrated into a practical application in accordance with MPEP §2106.04(d) because the claimed method and system merely implements the above-identified abstract idea (e.g., mental process) using rules (e.g., computer instructions) executed by a computer (e.g., at least one hardware processor as claimed). In other words, these claims are merely directed to an abstract idea with additional generic computer elements which do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer according to MPEP §2106.05(f). Thus, for these additional reasons, the abstract idea identified above in independent Claim 12 (and its respective dependent claims) is not integrated into a practical application under MPEP §2106.04(d) and MPEP §2106.05(f). Accordingly, independent Claim 12 (and its respective dependent claims) are each directed to an abstract idea according to MPEP §2106.04(d).

Step 2B per MPEP 2106.05

None of Claims 12-14, 17-29, and 58-60 include additional elements that are sufficient to amount to significantly more than the abstract idea in accordance with MPEP §2106.05 for at least the following reasons. These claims require the additional elements of: a display; a camera; a memory; at least one hardware processor; receive an indication identifying a type of the patient interface, wherein the patient interface is configured to engage with at least one airway of the patient and supply, to the patient, breathable gas that is received from a continuous positive air pressure (CPAP) device of the CPAP system; and receive, from the camera, one or more images that include the patient with the patient interface. The above-identified additional elements use generically claimed computer components which enable the above-identified abstract idea(s) to be conducted by performing the basic functions of automating mental tasks.
The courts have recognized such computer functions as well understood, routine, and conventional functions when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. See MPEP §2106.05(d)(II) along with Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); and OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93. In light of Applicant’s specification, the claimed terms “display”, “camera”, “memory”, and “at least one hardware processor” are reasonably construed as generic computing devices. As in SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161 (Fed. Cir. 2018), it is clear, from the claims themselves and the specification, that these limitations require no improved computer resources, just already available technology, with their already available basic functions, to use as tools in executing the claimed process. See MPEP §2106.05(f). Furthermore, Applicant’s specification does not describe any special programming or algorithms required for the at least one hardware processor. This lack of disclosure is acceptable under 35 U.S.C. §112(a) since this hardware performs non-specialized functions known by those of ordinary skill in the computer arts. By omitting any specialized programming or algorithms, Applicant's specification essentially admits that this hardware is conventional and performs well understood, routine and conventional activities in the computer industry or arts. In other words, Applicant’s specification demonstrates the well-understood, routine, conventional nature of the above-identified additional elements because it describes these additional elements in a manner that indicates that the additional elements are sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. § 112(a) (see the Berkheimer memo dated April 19, 2018, (III)(A)(1) on page 3).
Adding hardware that performs “‘well understood, routine, conventional activit[ies]’ previously known to the industry” will not make claims patent-eligible (TLI Communications). The recitation of the above-identified additional limitations in Claims 12-14, 17-29, and 58-60 amounts to mere instructions to implement the abstract idea on a computer. Simply using a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not provide significantly more. See MPEP §2106.05(f) along with Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); and TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Moreover, implementing an abstract idea on a generic computer does not add significantly more, similar to how the recitation of the computer in the claim in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer. A claim that purports to improve computer capabilities or to improve an existing technology may provide significantly more. See MPEP §2106.05(a) along with McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); and Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). However, a technical explanation as to how to implement the invention should be present in the specification for any assertion that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes.
That is, per MPEP §2106.05(a), the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. Here, Applicant’s specification does not include any discussion of how the claimed invention provides a technical improvement realized by these claims over the prior art or any explanation of a technical problem having an unconventional technical solution that is expressed in these claims. Instead, as in Affinity Labs of Tex. v. DirecTV, LLC, 838 F.3d 1253, 1263-64, 120 USPQ2d 1201, 1207-08 (Fed. Cir. 2016), the specification fails to provide sufficient details regarding the manner in which the claimed invention accomplishes any technical improvement or solution. For at least the above reasons, Claims 12-14, 17-29, and 58-60 are directed to applying an abstract idea as identified above on a general purpose computer without (i) improving the performance of the computer itself or providing a technical solution to a problem in a technical field according to MPEP §2106.05(a), or (ii) providing meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that these claims amount to significantly more than the abstract idea itself according to MPEP §2106.04(d)(2) and §2106.05(e). Taking the additional elements individually and in combination, the additional elements do not provide significantly more. Specifically, when viewed individually, the above-identified additional elements in independent Claim 12 (and its dependent claims) do not add significantly more because they are simply an attempt to limit the abstract idea to a particular technological environment according to MPEP §2106.05(h).
When viewed as a combination, these above-identified additional elements simply instruct the practitioner to implement the claimed functions with well-understood, routine and conventional activity specified at a high level of generality in a particular technological environment according to MPEP §2106.05(h). When viewed as a whole, the above-identified additional elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself according to MPEP §2106.04(d)(2) and §2106.05(e). Moreover, neither the general computer elements nor any other additional element adds meaningful limitations to the abstract idea because these additional elements represent insignificant extra-solution activity according to MPEP §2106.05(g). As such, there is no inventive concept sufficient to transform the claimed subject matter into a patent-eligible application as required by MPEP §2106.05. Therefore, for at least the above reasons, none of Claims 12-14, 17-29, and 58-60 amounts to significantly more than the abstract idea itself. Accordingly, Claims 12-14, 17-29, and 58-60 are not patent eligible and are rejected under 35 U.S.C. §101.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6, 8, 11-14, 18, 21-22, 24, 26-41, 58, and 60 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lawrenson & Nolan (US 20170203071 A1).

Regarding claim 1, Lawrenson discloses A respiratory pressure therapy system (10, 14, 16, 18, 28; Fig.
1) for providing continuous positive air pressure (CPAP) (14, 16, 18; Fig. 1) to a patient via a patient interface (18; Fig. 1) configured to engage with at least one airway of the patient ([0090], lines 3-7), the system comprising: a flow generator (16; Fig. 1) configured to generate a supply of breathable gas for delivery to the patient via the patient interface ([0091], lines 4-6), wherein the breathable gas is output from the flow generator ([0091], lines 4-5) at a pressure level that is above an atmospheric pressure (inherent to the definition of continuous positive air pressure); at least one sensor (Camera 42, microphone 44, distance sensor 46, patient interface sensor 48; Figs. 2, 3) that is configured to measure a physical quantity while the breathable gas is supplied to the patient ([0097], lines 1-10; [0104], lines 9-19); a display (52; Fig. 1); and a computing device (10; Fig. 1) including memory (38, 40; Fig. 1; [0094], lines 3-17) and at least one hardware processor ([0099], lines 1-6), the computing device (10; Fig. 1) configured to: receive, from the at least one sensor (Camera 42, microphone 44, distance sensor 46, patient interface sensor 48; Figs. 
2, 3), sensor data that is based on a measured physical property of the supply of breathable gas (receiving unit 32 is a part of system 10, [0094], lines 2-3; receiving unit 32 receives data based on measured physical property of supply of breathable gas from sensors 42, 44, 46, and 48, [0097], lines 1-7); control, based on the received sensor data (data received from sensors 42, 44, 46, 48), the flow generator (16) to adjust a property of the supply of breathable gas that is delivered to the patient ([0104], lines 5-8); receive an input indicating assistance is needed with using the patient interface ([0101], lines 31-37); receive one or more images that include the patient with the patient interface (system 10 receives one or more images of the patient 12 wearing the patient interface 18 via receiving unit 32, see [0101], lines 1-6); analyse the received one or more images (one or more images of patient 12 wearing patient interface 18 received by receiving unit 32, see [0101], lines 1-6) based on a comparison between: 1) one or more reference images that include the patient with the patient interface (optimal setting, where the optimal setting includes a virtual attachment of the 3D model of the patient interface to the 3D model of the patient’s head, see [0038], lines 13-20), and 2) the received one or more images that include the patient with the patient interface (received images are analyzed by comparing the received images to the optimal setting, see [0101], lines 38-41, where the orientation and/or position of the patient interface is in the received images); and based on the analysis, display, on the display (52), instructions for positioning the patient interface ([0099], lines 29-33).
Regarding claim 2, Lawrenson discloses the invention as set forth in claim 1, wherein the one or more reference images (optimal setting, where the optimal setting includes a virtual attachment of the 3D model of the patient interface to the 3D model of the patient’s head, see [0038], lines 13-20) including the patient with the patient interface are stored in the memory (38, 40; [0094], lines 3-17, where databases 38 and 40 are one common database).

Regarding claim 3, Lawrenson discloses the invention as set forth in claim 1, wherein the computing device (10) is coupled to a camera (42; Figs. 2, 3) configured to capture the one or more reference images ([0095], lines 9-11).

Regarding claim 4, Lawrenson discloses the invention as set forth in claim 1, wherein the computing device (10) is configured to compare the one or more reference images and the received one or more images ([0101], lines 38-41; see claim 1 above) to determine an improper fit position of the patient interface fitting position ([0099], lines 19-29, where the usage data is the received one or more images of the patient 12 wearing the patient interface 18).

Regarding claim 5, Lawrenson discloses the invention as set forth in claim 1, wherein the computing device (10) is configured to superimpose a correct position of the patient interface on the received one or more images ([0082], lines 1-9), wherein the displayed instructions include the one or more superimposed images (Fig. 1; [0099], lines 29-33).

Regarding claim 6, Lawrenson discloses the invention as set forth in claim 1, further comprising a remote computing system (32, 34, 36, 38, 40; [0094], last sentence; [0070], lines 2-9; [0106], lines 13-23) and the remote computing system is configured to determine the instructions for positioning the patient interface, and transmit the instructions to the computing device ([0099], lines 27-33, where the system 10 includes advice unit 36 and display 52).
Regarding claim 8, Lawrenson discloses the invention as set forth in claim 1, wherein the computing device (10) is further configured to receive a second input indicating a type of the patient interface (images of patient with patient interface are received by system 10, [0101], lines 1-8 and [0101], lines 24-27; images of patient with patient interface received by 10 indicate the type of patient interface, [0049], lines 25-28), and display, on the display (52), instructions for using the type of patient interface indicated by the second input ([0049], lines 31-34).

Regarding claim 11, Lawrenson further discloses a non-transitory computer readable storage medium (solid-state medium; [0109], lines 1-4) storing instructions for use (computer program, see [0109], lines 1-4) with a computing device (10), the computing device configured to control a continuous positive air pressure (CPAP) (14, 16, 18; Fig. 1) device configured to generate the supply of breathable gas that is delivered to a patient via a patient interface configured to engage with at least one airway of the patient ([0090], lines 3-7), wherein the breathable gas is output from a flow generator (16; Fig. 1; [0091], lines 4-5) at a pressure level that is above atmospheric pressure (inherent to the definition of continuous positive pressure), the CPAP device associated with at least one sensor (camera 42, microphone 44, distance sensor 46, patient interface sensor 48; Figs.
2, 3) that is configured to measure a physical quantity while the breathable gas is supplied to the patient ([0097], lines 1-10; [0104], lines 9-19), the computing device (10) including at least one hardware processor ([0099], lines 1-6), the stored instructions (computer program, see [0109], lines 1-4) comprising instructions (computer program stored instructions for 32, 34, 36, 38, and 40; [0094], lines 1-12) that are configured to cause the computing device (10) to: receive, from the at least one sensor (camera 42, microphone 44, distance sensor 46, patient interface sensor 48; Figs. 2, 3), sensor data that is based on a measured physical property of the supply of breathable gas (receiving unit 32 is a part of system 10, [0094], lines 2-3; receiving unit 32 receives data based on measured physical property of supply of breathable gas from sensors 42, 44, 46, and 48, [0097], lines 1-7); control, based on the received sensor data (data received from sensors 42, 44, 46, 48), the flow generator (16) to adjust a property of the supply of breathable gas that is delivered to the patient ([0104], lines 5-8); receive an input indicating assistance is needed with using the patient interface ([0101], lines 31-37); receive one or more images that include the patient with the patient interface (system 10 receives one or more images of the patient 12 wearing the patient interface 18 via receiving unit 32, see [0101], lines 1-6); analyse the received one or more images (one or more images of patient 12 wearing patient interface 18 received by receiving unit 32, see [0101], lines 1-6) based on a comparison between one or more reference images that include the patient with the patient interface (optimal setting, where the optimal setting includes a virtual attachment of the 3D model of the patient interface to the 3D model of the patient’s head, see [0038], lines 13-20), and the received one or more images that include the patient with the patient interface (received images are
analyzed by comparing the received images to the optimal setting, see [0101], lines 38-41, where the orientation and/or position of the patient interface is in the received images); and based on the analysis, display, on the display (52), instructions for positioning the patient interface ([0099], lines 29-33).

Regarding claim 12, Lawrenson further discloses a device (10; Fig. 1) for guiding a patient to position and/or adjust a fitting position of a patient interface of a continuous positive air pressure (CPAP) system ([0099], lines 19-39), the device (10; Fig. 1) comprising: a display (52; Fig. 1); a camera (42; Fig. 1); a memory (38, 40; Fig. 1; [0094], lines 3-17); and at least one hardware processor ([0099], lines 1-6) coupled to the display (52, see Fig. 2), the camera (42, see Fig. 2), and the memory (38, 40, see Fig. 2), the at least one hardware processor ([0099], lines 1-6) configured to: receive an indication identifying a type of the patient interface ([0101], lines 31-35, where the image analysis unit 58 receives images of the patient with the patient interface that indicate the type of patient interface, see [0049], lines 25-28), wherein the patient interface (18; Fig.
1) is configured to engage with at least one airway of the patient ([0090], lines 3-7) and supply, to the patient, breathable gas that is received from a continuous positive air pressure (CPAP) device of the CPAP system ([0091], lines 4-6); receive, from the camera (42), one or more images that include the patient with the patient interface (one or more images of the patient 12 wearing the patient interface 18 taken via camera 42, see [0101], lines 1-6); analyse the received one or more images (images taken by camera 42 of the patient 12 wearing the patient interface 18, see [0101], lines 1-6) to determine a fit of the patient interface on the patient based on a comparison between one or more reference images that include the patient with the patient interface (optimal setting, where the optimal setting includes a virtual attachment of the 3D model of the patient interface to the 3D model of the patient’s head, see [0038], lines 13-20), and the received one or more images that include the patient with the patient interface (received images are analyzed by comparing the received images to the optimal setting, see [0101], lines 38-41, where the orientation and/or position of the patient interface is in the received images); and based on the analysis, display, on the display (52), feedback for improving the fit of the patient interface on the patient ([0099], lines 29-33).

Regarding claim 13, Lawrenson discloses the invention as set forth in claim 12, wherein the received indication identifying the type of the patient interface ([0101], lines 31-35, where the image analysis unit 58 receives images of the patient with the patient interface that indicate the type of patient interface, see [0049], lines 25-28) is a user input (users take image using camera 42, [0101], lines 1-3; images from camera 42 are received as input by receiving unit 32, [0101], lines 5-8).
Regarding claim 14, Lawrenson discloses the invention as set forth in claim 12, wherein the received indication identifying the type of the patient interface ([0101], lines 31-35, where the image analysis unit 58 receives images of the patient with the patient interface that indicate the type of patient interface, see [0049], lines 25-28) is determined, by the processing system (advice unit 36), based on an image that includes the patient interface (received images taken by camera 42 include the patient 12 wearing the patient interface, see [0101], lines 1-6).

Regarding claim 18, Lawrenson discloses the invention as set forth in claim 12, wherein the analysis includes extracting, from the received one or more images (images from camera 42), one or more indicators (64; Fig. 1) included in the patient interface (see Fig. 1).

Regarding claim 21, Lawrenson discloses the invention as set forth in claim 12, wherein displaying the feedback includes displaying at least one of the received images ([0099], lines 29-33), and the processing system (advice unit 36) is configured to include, in the displayed received images, one or more visual indicators indicating locations on the patient interface where the fit of the patient interface can be improved (arrow and instructions 56 on display 52, see Fig. 1; [0099], lines 29-33).

Regarding claim 22, Lawrenson discloses the invention as set forth in claim 12, wherein the processing system (advice unit 36) is configured to display instructions for using the identified type of patient interface ([0049], lines 31-34; [0049], lines 50-53).

Regarding claim 24, Lawrenson discloses the invention as set forth in claim 12, wherein the analysis includes extracting features from the received one or more images (first reference location, second reference location; [0101], lines 10-15), and comparing positions and/or orientations of the features to a three-dimensional model of the patient interface ([0101], lines 31-37).
Regarding claim 26, Lawrenson discloses the invention as set forth in claim 12, wherein the device (10) is a mobile phone, tablet or remote control (Fig. 1; [0093]).

Regarding claim 27, Lawrenson discloses the invention as set forth in claim 12, wherein the processing system (advice unit 36) is further configured to receive sensor data from one or more sensors (receive data from sensors 48; Fig. 1) disposed on a surface of or in the patient interface (18, see Fig. 1; [0097], lines 12-14).

Regarding claim 28, Lawrenson discloses the invention as set forth in claim 27, wherein the processing system (advice unit 36) is further configured to perform the analysis ([0098], lines 1-7) and display the feedback based on data received from the sensors ([0098], lines 7 to end of paragraph).

Regarding claim 29, Lawrenson discloses the invention as set forth in claim 27, wherein the sensor data (data received from sensor 48) is compared to pre-set sensor values (threshold value, [0105], lines 25-27) stored in the memory (38; database 38 stores technical data relating to the pressure support system 14 and its components, hence a gas parameter’s threshold value detected by sensor 48 would be stored in database 38, Abstract).
Regarding claim 30, Lawrenson further discloses a non-transitory computer readable storage medium (solid-state medium; [0109], lines 1-4) storing instructions for use (computer program, see [0109], lines 1-4) with a computing device (10), the stored instructions (computer program, see [0109], lines 1-4) comprising instructions (computer program stored instructions for 32, 34, 36, 38, and 40; [0094], lines 1-12) that are configured to cause the computing device (10) to: receive an indication identifying a type of a patient interface ([0101], lines 31-35, where the image analysis unit 58 receives images of the patient with the patient interface that indicate the type of patient interface, see [0049], lines 25-28), configured to engage with at least one airway of a patient ([0090], lines 3-7) and supply breathable gas received from a continuous positive air pressure (CPAP) device to the patient ([0091], lines 4-6); receive, from a camera (42), one or more images that include a patient with the patient interface (system 10 receives one or more images of the patient 12 wearing the patient interface 18 via receiving unit 32, where the one or more images are taken by camera 42, see [0101], lines 1-6); analyse the received one or more images (one or more images of patient 12 wearing patient interface 18 taken by camera 42, see [0101], lines 1-6) to determine fit of the patient interface on the patient based on a comparison between: 1) one or more reference images that include the patient with the patient interface (optimal setting, where the optimal setting includes a virtual attachment of the 3D model of the patient interface to the 3D model of the patient's head, see [0038], lines 13-20), and 2) the received one or more images that include the patient with the patient interface (received images are analyzed by comparing the received images to the optimal setting, see [0101], lines 38-41, where the orientation and/or position of the patient interface is determined from the received images); and based on the analysis, display, on the display (52), feedback for improving the fit of the patient interface on the patient ([0099], lines 29-33).

Regarding claim 31, Lawrenson discloses a respiratory pressure therapy system (10, 14, 16, 18, 28; Fig. 1) for providing continuous positive air pressure (CPAP) (14, 16, 18; Fig. 1) to a patient via a patient interface (18; Fig. 1) including one or more patient interface sensors (48; Figs. 1, 2) and configured to engage with at least one airway of the patient ([0090], lines 3-7), the system comprising: a flow generator (16; Fig. 1) configured to generate a supply of breathable gas for delivery to the patient via the patient interface ([0091], lines 4-6), wherein the breathable gas is output from the flow generator ([0091], lines 4-5) at a pressure level that is above atmospheric pressure (inherent to the definition of continuous positive air pressure); a display (52; Fig. 1); and a computing device (10; Fig. 1) including memory (38, 40; Fig. 1; [0094], lines 3-17) and at least one hardware processor ([0099], lines 1-6), the computing device configured to: control the flow generator (16) to adjust a property of the supply of breathable gas that is delivered to the patient (controlled via data received from sensors 42, 44, 46, and 48, [0104], lines 5-8); receive sensor data (data from sensor 48) from the one or more patient interface sensors (48) comprising a connection sensor (extensometer, strain gauge; [0104], lines 9-14) configured to detect a connection between two components of the patient interface ([0104], lines 14-18, where the measure of a tensile stress or force occurring within the headgear straps 30 can detect a connection between the headgear straps 30 and mask shell 22); compare the received sensor data to pre-set values in the memory (advice unit 36 of system 10 compares the received data from sensor 48 to pre-set values of an optimal setting, where the information to determine the optimal setting is stored in databases 38 and 40, where databases 38 and 40 are one database; [0070], [0092], [0094], lines 3-17, and [0099], lines 19-23); based on the comparison, display, on the display (52; Fig. 1), instructions for positioning the patient interface ([0099], lines 24 to the end of the paragraph); subsequent to display of the instructions for positioning the patient interface, receive further sensor data from one or more sensors disposed with the patient interface (the sensor 48 acquires current usage data and is disposed with the patient interface 18, see [0097], hence the sensor 48 is always acquiring usage data and transmitting the usage data to the receiving unit 32 of system 10; therefore, the system 10 is capable of receiving further sensor data from the sensor 48 subsequent to the display of instructions for positioning the patient interface 18; see also [0104], lines 1-7); and automatically validate that the fit of the patient interface on the patient has been corrected based on the sensor data (the sensor 48 acquires current usage data and is disposed with the patient interface 18, see [0097], hence the sensor 48 is always acquiring usage data and transmitting the usage data to the receiving unit 32 of system 10; additionally, the advice unit 36 compares the current data from the sensor 48 with the optimal setting to generate personalized advice, where if the current setting of the current data from sensor 48 deviates from the optimal setting, the advice unit 36 will automatically be triggered and will generate and output the personalized advice, see [0099], lines 24-30 and [0105], lines 9 to the end of the paragraph, hence the corrected position and orientation of the patient interface on the patient is automatically validated).

Regarding claim 32, Lawrenson discloses the invention as set forth in claim 31, wherein the memory (38, 40; Fig. 1; [0094], lines 3-17) includes one or more images including a patient with the patient interface ([0101], lines 1-8; [0101], lines 24-27), wherein the displayed instructions include displaying the one or more images based on the comparison (Fig. 1; [0099], lines 29-33).
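The compare-to-pre-set-values and automatic-validation logic described above can be sketched in a few lines. This is an illustrative sketch only, not part of the record: the parameter names, ranges, and the `fit_is_validated` helper are hypothetical and do not come from Lawrenson or the claims.

```python
# Illustrative sketch only: hypothetical names and values, not from
# the cited references. Current sensor readings are compared against
# pre-set acceptable ranges (the stored "optimal setting"); the fit is
# automatically validated only when no parameter deviates.

PRESET_RANGES = {                       # hypothetical stored values
    "strap_tension_n": (2.0, 5.0),      # acceptable (min, max)
    "mask_pressure_cmh2o": (4.0, 20.0),
}

def fit_is_validated(sensor_data: dict) -> bool:
    """Return True when every measured parameter lies inside its
    pre-set range, i.e. the corrected fit is validated; a deviation
    would instead trigger corrective advice."""
    for name, value in sensor_data.items():
        low, high = PRESET_RANGES[name]
        if not (low <= value <= high):
            return False
    return True

# Both readings in range -> fit validated
print(fit_is_validated({"strap_tension_n": 3.1, "mask_pressure_cmh2o": 9.5}))
```

A reading outside either range (for example a strap tension of 6.0) would return `False`, corresponding to the advice unit being triggered rather than the fit being validated.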
Regarding claim 33, Lawrenson discloses the invention as set forth in claim 31, wherein the computing device (10) is coupled to a camera (42; Figs. 2, 3) configured to capture one or more images of the patient with the patient interface (one or more images of the patient 12 wearing the patient interface 18 are taken by camera 42, see [0101], lines 1-6).

Regarding claim 34, Lawrenson discloses the invention as set forth in claim 33, wherein the computing device (10) is configured to compare one or more reference images (optimal setting, where the optimal setting includes a virtual attachment of the 3D model of the patient interface to the 3D model of the patient's head, see [0038], lines 13-20) stored in memory (38 and 40, see [0094], lines 3-17; information to identify the optimal setting is stored in databases 38 and 40, see [0099], lines 11-15, where databases 38 and 40 are one singular database) and the captured one or more images (one or more images of patient 12 wearing patient interface 18 taken by camera 42, see [0101], lines 1-6) to determine improper patient interface fitting position (received images are analyzed by comparing the received images to the optimal setting, see [0101], lines 38-41, where the orientation and/or position of the patient interface is determined from the received images; the comparison determines if the current position and/or orientation of the patient interface 18 on the patient 12 is proper or improper, see [0099], lines 11-29).

Regarding claim 35, Lawrenson discloses the invention as set forth in claim 34, wherein the computing device (10) is configured to superimpose a correct position of the patient interface on the captured one or more images ([0082], lines 1-9), wherein the displayed instructions include the one or more superimposed images (Fig. 1; [0099], lines 29-33).
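The image comparison described above, checking positions detected in a captured image against the stored reference configuration, can be sketched as follows. Illustrative only: the point names, coordinates, and pixel tolerance are hypothetical and not taken from Lawrenson.

```python
# Illustrative sketch only; hypothetical point names and tolerance.
# Reference positions (from the stored optimal setting) are compared
# to positions detected in the captured image; the fit is "proper"
# when every point lies within a small tolerance of its reference.
import math

def points_match(detected: dict, reference: dict, tol: float = 5.0) -> bool:
    """True when every detected point lies within `tol` pixels of its
    counterpart in the reference configuration."""
    return all(
        math.dist(detected[name], reference[name]) <= tol
        for name in reference
    )

reference = {"first_location": (120.0, 80.0), "second_location": (180.0, 82.0)}
detected  = {"first_location": (122.5, 79.0), "second_location": (179.0, 85.0)}
print(points_match(detected, reference))  # small offsets within tolerance
```

A larger displacement of either point beyond the tolerance would yield `False`, corresponding to an improper fitting position that prompts corrective instructions.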
Regarding claim 36, Lawrenson discloses the invention as set forth in claim 31, further comprising a remote computing system (32, 34, 36, 38, 40; [0094], last sentence; [0070], lines 2-9; [0106], lines 13-23), and the remote computing system is configured to determine the instructions for positioning the patient interface, and transmit the instructions to the computing device ([0099], lines 27-33, where the system 10 includes advice unit 36 and display 52).

Regarding claim 37, Lawrenson discloses the invention as set forth in claim 31, wherein the computing device (10) is further configured to receive an input indicating a type of the patient interface (images of patient with patient interface are received by system 10, [0101], lines 1-8 and [0101], lines 24-27; images of patient with patient interface received by 10 indicate the type of patient interface, [0049], lines 25-28), and display, on the display (52), instructions for using the type of patient interface indicated by the input ([0049], lines 31-34).

Regarding claim 38, Lawrenson discloses the invention as set forth in claim 31, wherein at least one of the one or more patient interface sensors (48) is disposed on a surface of the patient interface ([0097], lines 12-14).

Regarding claim 39, Lawrenson discloses the invention as set forth in claim 31, wherein at least one of the one or more patient interface sensors (48) is a pressure sensor (48 includes a pressure sensor, [0104], lines 8-21) configured to measure a pressure inside of a mask of the patient interface ([0104], lines 18-21).

Regarding claim 40, Lawrenson discloses the invention as set forth in claim 31, wherein at least one of the one or more patient interface sensors (48) is a sensor configured to measure a physical property of the supply of breathable gas ([0097], lines 1-7) in an oral or nasal cushion of the patient interface ([0090], lines 3-8; 48 is within nasal cushion element 20, [0104], lines 18-21).
Regarding claim 41, Lawrenson discloses the invention as set forth in claim 31, wherein at least one of the one or more patient interface sensors (48) is disposed on a surface or inside of a strap of the patient interface ([0097], lines 12-14).

Regarding claim 58, Lawrenson discloses the invention as set forth in claim 12, wherein the at least one hardware processor ([0099], lines 1-6) is further configured to: subsequent to display of the instructions for positioning the patient interface, receive sensor data from one or more sensors disposed with the patient interface (the sensor 48 acquires current usage data and is disposed with the patient interface 18, see [0097], hence the sensor 48 is always acquiring usage data and transmitting the usage data to the receiving unit 32 of system 10; therefore, the system 10 is capable of receiving further sensor data from the sensor 48 subsequent to the display of instructions for positioning the patient interface 18; see also [0104], lines 1-7); and automatically validate that the fit of the patient interface on the patient has been corrected based on the sensor data (the sensor 48 acquires current usage data and is disposed with the patient interface 18, see [0097], hence the sensor 48 is always acquiring usage data and transmitting the usage data to the receiving unit 32 of system 10; additionally, the advice unit 36 compares the current data from the sensor 48 with the optimal setting to generate personalized advice, where if the current setting of the current data from sensor 48 deviates from the optimal setting, the advice unit 36 will automatically be triggered and will generate and output the personalized advice, see [0099], lines 24-30 and [0105], lines 9 to the end of the paragraph, hence the corrected position and orientation of the patient interface on the patient is automatically validated).
Regarding claim 60, Lawrenson discloses the invention as set forth in claim 12, wherein the at least one hardware processor ([0099], lines 1-6) is further configured to display, on the display (52; Fig. 1), an image that includes the patient wearing the patient interface, wherein the feedback for improving the fit of the patient interface is overlaid with the image ([0099], lines 29-34).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7, 9, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Lawrenson & Nolan (US 20170203071 A1) in view of Lucey et al. (US 20170173289 A1).

Regarding claim 7, Lawrenson discloses the invention as set forth in claim 6, wherein the remote computing system (32, 34, 36, 38, 40; [0094], last sentence; [0070], lines 2-9; [0106], lines 13-23) is configured to: receive, from the computing device, the one or more images ([0101], lines 1-8; [0101], lines 24-27), but fails to disclose wherein the remote computing system is configured to: train a machine learning model for instructing correct positioning of a patient interface, wherein the instructions for positioning the patient interface are determined based on the trained machine learning model.
However, Lucey teaches the making of a virtual 3D model of a user's face, wherein the model is created via the identification of a plurality of points associated with landmark features on the user's face from a plurality of video frames, or images, based on a machine learning algorithm ([0025], last sentence; [0040], lines 1-5). The machine learning algorithm can be trained to recognize movement of one or more points, or landmarks, on a user's face based on a larger training dataset, containing a plurality of three-dimensional surfaces representative of a plurality of users' faces, that is stored in the memory of the computing system ([0127], lines 3-9). The machine learning algorithm is used to generate a support vector or other supervised learning model to classify and recognize points or landmarks on the user's face ([0129]).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Lawrenson with Lucey such that the remote computing system (32, 34, 36, 38, 40; [0094], last sentence; [0070], lines 2-9; [0106], lines 13-23) is configured to: train a machine learning model for instructing correct positioning of a patient interface (Lucey: [0127], lines 3-9; [0129]), wherein the instructions for positioning the patient interface are determined based on the trained machine learning model ([0102], last two sentences, where the model is received by the advice unit 36 and the analysis of the model compared to the one or more images of the patient with the patient interface results in instructions for correcting the positioning of the patient interface, [0101], lines 31-36, where the model is the machine learning model of Lucey, [0127], lines 3-9; [0129]) to enhance the tracking and identification of the user's face and the patient interface (Lucey: [0127], lines 1-6).
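As context for the supervised-learning description above, the train-then-classify step can be sketched briefly. This is an illustrative stand-in only: a simple nearest-centroid classifier substitutes for the support vector model, and the landmark labels and feature vectors are hypothetical, not taken from the cited references.

```python
# Illustrative sketch only; nearest-centroid classification stands in
# for the "support vector or other supervised learning model", and the
# labels and feature vectors below are hypothetical.
import math

def train_centroids(samples):
    """samples: list of (feature_vector, landmark_label) pairs.
    Returns the mean feature vector (centroid) per landmark label."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, vec):
    """Label a feature vector with its nearest landmark centroid."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], vec))

training = [([0.10, 0.20], "nose_bridge"), ([0.15, 0.25], "nose_bridge"),
            ([0.80, 0.90], "chin"), ([0.85, 0.95], "chin")]
model = train_centroids(training)
print(classify(model, [0.12, 0.22]))  # -> nose_bridge
```

The design point mirrors the cited teaching: a model trained on many labeled examples is later used to recognize the same landmarks in new images.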
Regarding claim 9, Lawrenson, as modified by Lucey (see above), teaches the invention as set forth in claim 1, wherein the computing device (10) is further configured to receive an input indicating a type of the patient interface (images of patient with patient interface are received by system 10, [0101], lines 1-8 and [0101], lines 24-27; images of patient with patient interface received by 10 indicate the type of patient interface, [0049], lines 25-28), and display, on the display (52), instructions for using the type of patient interface indicated by the input ([0049], lines 31-34).

Regarding claim 25, Lawrenson, as modified by Lucey (see above), teaches the invention as set forth in claim 12, wherein the analysis includes comparing the received one or more images (images from camera 42) to a machine trained model (Lucey: support vector machine or other supervised learning model generated by machine learning algorithm; [0129]) that is updated based on data received from other patients (Lucey: [0127], lines 3-9).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Lawrenson & Nolan (US 20170203071 A1).

Regarding claim 17, Lawrenson discloses the invention set forth in claim 12, but fails to explicitly disclose wherein the one or more reference images (virtual 3D model of patient's head; [0095], lines 4-6) include a plurality of reference points, and the analysis includes detecting reference points in the received one or more images and comparing the detected reference points to the plurality of reference points in the one or more reference images.
However, Lawrenson does disclose one or more received images from the camera (42) have reference locations (first reference location, second reference location), where the image analysis unit (58) can identify the orientation and/or position of the patient interface (18) relative to the head of the patient via an image matching algorithm that utilizes the first and second reference locations for comparison and analysis ([0101], lines 10-15). Lawrenson further discloses that the image analysis unit (58) comprises one or more program modules that form part of a software application that also carries out the function of the advice unit (36; [0101], lines 24-27), which analyzes and compares received one or more images from the camera (42) to one or more reference images (virtual 3D model of patient's head; [0095], lines 4-6).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify Lawrenson, such that the one or more reference images (virtual 3D model of patient's head; [0095], lines 4-6) include a plurality of reference points (first location, second location), and the analysis includes detecting reference points in the received one or more images and comparing the detected reference points to the plurality of reference points in the one or more reference images ([0101], lines 10-15) to improve accuracy of determining the orientation and/or position of the patient interface relative to the head of the patient by being able to easily identify features on the patient interface and the head of the patient ([0101], lines 13-21).

Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lawrenson & Nolan (US 20170203071 A1) in view of Richard et al. (US 20060235877 A1).
Regarding claim 19, Lawrenson discloses the invention set forth in claim 12, but fails to explicitly disclose wherein the received one or more images (images from camera 42) are captured from different positions and orientations of the camera. However, Richard teaches a camera provided to take images of the patient from various angles for the measurement of various dimensions of the patient's head ([0034], lines 1-5). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify Lawrenson with Richard, such that the received one or more images (images from camera 42) are captured from different positions and orientations of the camera (Richard: [0034], lines 1-5) to measure and capture various dimensions of the patient's head (Richard: [0034], lines 5-6).

Regarding claim 20, Lawrenson discloses the invention set forth in claim 12, but fails to explicitly disclose wherein a plurality of images are captured at different times, and the analysis includes comparing the plurality of images to determine changes in positioning of the patient interface between images captured at the different times. However, Richard teaches a plurality of images (full facial scans including multiple images combined, or stitched, together) taken over a period of time which can be used to show how the patient's face has changed. Richard further teaches Delta View Programs that can compare full facial scans from different times to show geometrically how the patient's face has changed and the specific areas where the most change has occurred ([0145], lines 1-9).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify Lawrenson with Richard, such that a plurality of images (full facial scans including multiple images combined, or stitched, together) are captured at different times, and the analysis includes comparing the plurality of images to determine changes in positioning of the patient interface between images captured at the different times (Richard: [0145], lines 1-9) as changes in patient weight can affect the fit of a respiratory mask (Richard: [0145], lines 4-6). With the information from the analysis above, a recommendation could be made as to whether the patient requires a new mask size or different type of mask (Richard: [0145], lines 9-11).

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Lawrenson & Nolan (US 20170203071 A1) in view of Ho et al. (US 20150193650 A1).

Regarding claim 23, Lawrenson discloses the invention set forth in claim 12, but fails to explicitly disclose wherein the analysis includes comparing the received one or more images (images from camera 42, see claim 12 above) to models generated based on information received from other patients. However, Ho teaches a patient interface identification system for identifying a patient interface that is suited for a face of a user that includes a database (22) for storing reference pictures including the faces of other users ([0066], lines 1-3). Ho further teaches a processing unit (24) that analyses a received image of a user and compares said received image to the stored reference pictures ([0066], lines 8-14).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify Lawrenson with Ho, such that the analysis includes comparing the received one or more images (images from camera 42, see claim 12 above) to models generated based on information received from other patients (Ho: [0066], lines 1-3; [0066], lines 8-14) to determine advice on how to improve the position and/or orientation of the patient interface based on the analysis above (Ho: [0066], lines 12-14).

Claim 59 is rejected under 35 U.S.C. 103 as being unpatentable over Lawrenson & Nolan (US 20170203071 A1) in view of Viner et al. (WO 2019043578 A1).

Regarding claim 59, Lawrenson discloses the invention as set forth in claim 58, but fails to explicitly disclose wherein the at least one hardware processor ([0099], lines 1-6) is further configured to: subsequent to confirmation that the fit of the patient interface on the patient has been corrected or improved, capture one or more additional images of the patient with the patient interface; and store the one or more additional images as replacement or additional of the one or more reference images. However, Viner teaches an analogous system to guide a patient to adjust a fitting position of a respiratory mask (Abstract), where the respiratory sensor system (300) confirms the respiratory mask is positioned correctly when a measured aerosol leakage value drops below an accepted threshold, wherein a wireless sensor subsequently automatically captures an image of the respiratory mask in the correct position on the user's face, where the captured image is saved to be used as a reference image in the future (see pg. 20, lines 7-11).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Lawrenson with the above process taught by Viner, such that the at least one hardware processor ([0099], lines 1-6) is further configured to: subsequent to confirmation that the fit of the patient interface on the patient has been corrected or improved, capture one or more additional images of the patient with the patient interface (Viner: pg. 20, lines 7-10); and store the one or more additional images as replacement (Viner: pg. 20, line 11) to ensure a consistent fit and positioning of the patient interface on the patient every time the patient dons the patient interface (Viner: pg. 20, line 12).

Response to Arguments

Applicant's arguments filed 11/06/2025 have been fully considered but they are not persuasive.

On pages 18-19 of Remarks filed on 11/06/2025, Applicant states it is unclear what "fact" is being officially noticed in the 35 U.S.C. §101 rejection of Claim 12, as there is no clear identification of what is considered "additional steps/elements" (see pg. 10 of Office Action mailed 06/13/2025). The Examiner intended to identify the "additional steps/elements" as the additional elements of a camera, a display, a memory, and at least one hardware processor (see "[T]he claim includes additional elements…automating mental tasks[.]" on pages 9-10 of the Office Action mailed 06/13/2025); however, the Examiner acknowledges the use of Official notice led to confusion and lack of clarity in the analysis of the additional elements recited in Claim 12. The Examiner has provided a revised rejection of Claims 12-14, 17-29, and newly added Claims 58-60 under 35 U.S.C. §101 to clarify the analysis of Claims 12-14, 17-29, and 58-60 under the structure of the Alice two-part test (see the 101 rejection of Claims 12-14, 17-29, and 58-60 above).
On page 20, Applicant argues Claim 12 is not directed to an abstract idea as the claim is directed to a specific technological device for guiding a patient to position and/or adjust a fitting position of a patient interface of a continuous positive air pressure (CPAP) system to provide an improvement in CPAP therapy technology. However, Claim 12 is directed toward an abstract idea as the claim recites a generically claimed "device" to position and/or adjust a fitting position of a patient interface of a continuous positive air pressure (CPAP) system and not a CPAP therapy system itself.

Additionally, "[a] device for guiding a patient to position and/or adjust a fitting position of a patient interface of a continuous positive air pressure (CPAP) system" is recited in the preamble of Claim 12. MPEP §2111.02 states "[t]he claim preamble must be read in the context of the entire claim[.]" and "[i]f the body of a claim fully and intrinsically sets forth all of the limitations of the claimed invention, and the preamble merely states, for example, the purpose or intended use of the invention, rather than any distinct definition of any of the claimed invention's limitations, then the preamble is not considered a limitation and is of no significance to claim construction." Hence, the preamble of Claim 12 is not considered a limitation and does not integrate the abstract ideas recited in Claim 12 ("analyse the received…with the patient interface" and "based on the analysis…on the patient") into a practical application (see 101 analysis and rejection of Claim 12 above).

Additionally, newly added claims 58-60 have been analyzed and determined to recite a judicial exception and are directed toward an abstract idea (i.e., mental process) as the claims recite at least one step or instruction for collecting and evaluating information, which is grouped as a mental process (see 101 rejection of Claims 12-14, 17-29, and 58-60 above).
On pages 22-23, Applicant argues Lawrenson fails to anticipate independent Claims 1, 11, 12, and 30 as Lawrenson does not disclose a comparison that includes one or more reference images that include the patient with the patient interface and one or more received images that include the patient with the patient interface. However, Lawrenson does disclose comparing one or more received images, where the received images include the patient wearing the patient interface (see [0101], lines 1-6), being compared to the optimal setting (see [0101], lines 38-41, where the orientation and/or position of the patient interface is the received one or more images as the orientation and/or position of the patient interface is determined from the one or more received images), where the optimal setting includes a virtual attachment of the 3D model of the patient interface to the 3D model of the patient's head (see [0038], lines 13-20). Therefore, Lawrenson does disclose a comparison that includes one or more reference images that include the patient with the patient interface and one or more received images that include the patient with the patient interface (see 102 rejection of Claims 1, 11, 12, and 30 above).

On page 24, Applicant argues Lawrenson does not disclose acquiring sensor data subsequent to display of the instruction and then automatically validating the fit of the patient interface based on such data. However, Lawrenson does disclose at least one sensor (48) acquiring current usage data (see [0097]), hence the at least one sensor (48) is always acquiring usage data, such that the at least one sensor (48) is acquiring usage data at a current point in time, and transmitting the usage data to the receiving unit (32); therefore, the receiving unit (32) is always up-to-date with the current usage data from the at least one sensor (48).
Thus, the system (10) constantly and consistently receives data from the at least one sensor (48), no matter if the position/orientation of the patient interface on the patient is correct or incorrect. Therefore, the at least one sensor (48) disclosed by Lawrenson is capable of transmitting usage data to be received by the receiving unit (32) of the system (10) after the display of the instructions (see also [0104], lines 1-7 and the 102 rejection of claim 31 above). Additionally, Lawrenson discloses the advice unit (36) is automatically triggered to generate and output personalized advice when the usage data analysis unit (34) determines a malfunction and/or misalignment of the patient interface (see [0077], [0079], [0099], lines 24-30, and [0105], lines 9 to the end of the paragraph). Hence, the respiratory pressure therapy system disclosed by Lawrenson (10, 14, 16, 18, 28; Fig. 1) is capable of automatically validating the position and/or orientation of the patient interface on the patient based on usage data from the at least one sensor (48), as the system (10) will continuously check the position and/or orientation of the patient interface on the patient using current data from the at least one sensor (48), and will only generate and output advice on how to adjust the position and/or orientation of the patient interface if a value of a measured parameter measured by the at least one sensor (48) exceeds a certain pre-determined threshold value (see [0105] and the 102 rejection of claim 31 above).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Fu et al. (WO 2017000031 A1): regarding analysis of image data captured with a camera and comparison of the captured image with a reference image.

Hudson et al. (US 20200121873 A1): regarding capturing an image of a patient wearing a patient interface and steps to adjust the patient interface on the patient for a proper fit.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABIGAYLE DALE whose telephone number is (571) 272-1080. The examiner can normally be reached Monday-Friday from 8:45am to 5:45pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brandy Lee, can be reached at (571) 270-7410. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ABIGAYLE DALE/
Examiner, Art Unit 3785

/BRANDY S LEE/
Supervisory Patent Examiner, Art Unit 3785

Prosecution Timeline

May 26, 2022
Application Filed
Jun 11, 2025
Non-Final Rejection — §101, §102, §103
Oct 28, 2025
Interview Requested
Nov 04, 2025
Examiner Interview Summary
Nov 06, 2025
Response Filed
Jan 22, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12502499
ANESTHETIC GAS DISTRIBUTION DEVICE
Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
30%
Grant Probability
99%
With Interview (+77.8%)
3y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
