Prosecution Insights
Last updated: April 19, 2026
Application No. 18/723,814

VIRTUAL MASK WEARING METHOD AND APPARATUS, TERMINAL DEVICE, AND READABLE STORAGE MEDIUM

Non-Final OA (§102, §103)
Filed: Jun 24, 2024
Examiner: WILSON, NICHOLAS R
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: BMC MEDICAL CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
OA Rounds: 1-2
To Grant: 1y 12m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (grants above average; 467 granted / 537 resolved; +25.0% vs TC avg)
Interview Lift: +12.1% (moderate lift, measured across resolved cases with interview)
Avg Prosecution: 1y 12m (fast prosecutor; 25 currently pending)
Total Applications: 562 (career history, across all art units)
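The interview-lift figure above is simply the gap in allowance rate between resolved cases that had an examiner interview and those that did not. A minimal sketch of the computation, using a hypothetical split of the 537 resolved cases (the 96/100 and 371/437 counts are illustrative, not the tool's actual interview data):

```python
def interview_lift(allowed_with, resolved_with, allowed_without, resolved_without):
    """Percentage-point gap in allowance rate: with-interview vs. without."""
    rate_with = allowed_with / resolved_with
    rate_without = allowed_without / resolved_without
    return 100 * (rate_with - rate_without)

# Hypothetical split: 100 resolved cases with an interview (96 allowed),
# 437 without (371 allowed); together these match the 467/537 career totals.
lift = interview_lift(96, 100, 371, 437)
print(f"{lift:+.1f} pp")  # prints "+11.1 pp"
```

The lift is a raw rate difference, not a causal estimate; cases that get interviews may differ systematically from those that do not.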

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 41.1% (+1.1% vs TC avg)
§102: 24.0% (-16.0% vs TC avg)
§112: 14.8% (-25.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 537 resolved cases

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1, 2, 11, 12, 13, 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lashinsky et al. (US 2020/0384229) (hereinafter referred to as Lashinsky).

Regarding claim 1, Lashinsky teaches A virtual wearing method for a mask (A method of identifying a particular mask for a patient for use in delivering a flow of breathing gas to the patient includes capturing with a visual presentation and interaction component a plurality of images of the patient; receiving with the visual presentation and interaction component a number of responses from the patient to questions presented to the patient, eliminating one or more masks from a pool of potential masks for the patient based on at least one of the responses, utilizing at least some images of the plurality of images to determine the particular mask for the patient from the pool of potential masks, and identifying the particular mask to the patient.
See abstract)(See figure 39), wherein the method comprises: acquiring a facial image of a user (Upon selection of “Begin Scanning” button 206, software application/tool 41 will cause a patient scanning screen 220 to be displayed, as will now be discussed in conjunction with FIGS. 18-29. In the example embodiment described herein, scanning of a patient P is carried out by a user such as previously described (e.g., without limitation, a clinician or DME provider) holding Visual Presentation and Interaction Component 20 using rearward facing camera 30 and 3D imaging apparatus 32 of Visual Presentation and Interaction Component 20 to capture images of patient P. However, it is to be appreciated that scanning of a patient may be carried out by the patient them self in a “selfie” type scanning mode using forward facing camera 28 and 3D imaging apparatus of Visual Presentation and Interaction Component 20. Additionally, it is to be appreciated that other image capturing devices in addition to, or in place of, one or more of forward facing camera 28, rearward facing camera 30, and/or 3D imaging apparatus 32 may be employed for capturing images of a patient without varying from the scope of the present invention. See paragraph [0047])( Patient scanning screen 220 includes an image display area 222 and a message area 224. Image display area 222 may include an indicator 226, which in the example illustrated is in the form of a dashed elliptical line, provided therein for assisting in obtaining optimum placement of Visual Presentation and Interaction Component 20 with respect to patient P or vice-versa. In the example embodiment illustrated in FIGS. 18-29, video images captured by rearward facing camera 30 are displayed in image display area 222 to help guide the user in obtaining a successful 3D scan. 
Alternatively, video images captured by forward facing camera 28 may be displayed in image display area 222 if electronic device is being operated by the patient them self to conduct a 3D scan of his or herself. In such second instance, images captured by front facing camera 28 would be used to assist the patient in obtaining a successful 3D scan of his or herself. See paragraph [0048]); determining actual facial feature data according to the facial image (See figure 36, processing facial geometry)(the facial geometries of the patient determined from the scanning operation. See paragraph [0058]); determining and displaying one or more matched first masks of various models according to the actual facial feature data (See figure 37, calculating mask recommendations)( During such processing, software application/tool 41 may cause one or more of a processing screen 300 (FIG. 36) and/or a calculating screen 310 (FIG. 37) to be displayed on display 26, providing an indication to the user that software application/tool 41 is working to determine masks for the patient. See paragraph [0056])( Mask suggestion portion 324 of mask suggestion screen 320 includes a ranked selectable listing of different masks and sizing details thereof determined to be a best fit for the patient based on potential masks available (e.g., those indicated on Settings screen 132 previously discussed), dimensional information (e.g., geometries, dimensions, etc.) of the potential masks available, the facial geometries of the patient determined from the scanning operation, and the patient responses previously discussed. Such listing may also include an image of each mask. In the example shown in FIGS. 38-40, mask suggestion portion includes four different user selectable mask suggestions 326A-326D, however, it is to be appreciated that the quantity of mask suggestions provided may vary without varying from the scope of the present concept. 
See paragraph [0058]); determining a target mask from the one or more matched first masks according to received first input from the user (As shown in FIG. 39, software application/tool 41 will further provide a 3D mask image 332 of a particular mask arrangement from listing portion 322 upon selection thereof by a user. In the example shown in FIG. 39, the “#1 Match”, mask suggestion 326A was selected and thus mask image 332 thereof is displayed on 3D model 330. As an alternative to shaded 3D model 330, a user may select a “View on Glass Head” button 334 provided on mask suggestion screen 320. Upon selection of “View on Glass Head” button 334, software application/tool 41 changes the appearance of 3D model 330 from a shaded appearance to a glass-like 3D model 330′, such as shown in FIG. 40 which is rotatable similar to shaded 3D model 330. See paragraph [0059]); and generating wearing picture information according to the facial image and the target mask, and displaying the wearing picture information (As shown in FIG. 39, software application/tool 41 will further provide a 3D mask image 332 of a particular mask arrangement from listing portion 322 upon selection thereof by a user. In the example shown in FIG. 39, the “#1 Match”, mask suggestion 326A was selected and thus mask image 332 thereof is displayed on 3D model 330. As an alternative to shaded 3D model 330, a user may select a “View on Glass Head” button 334 provided on mask suggestion screen 320. Upon selection of “View on Glass Head” button 334, software application/tool 41 changes the appearance of 3D model 330 from a shaded appearance to a glass-like 3D model 330′, such as shown in FIG. 40 which is rotatable similar to shaded 3D model 330. See paragraph [0059]). 
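The claim-1 method mapped above is a five-step pipeline: capture a facial image, extract feature data, match candidate masks, take the user's pick as the target mask, and composite the wearing picture. A minimal sketch of that flow (the data schema, catalog, and nearest-dimensions scoring rule are hypothetical stand-ins, not Lashinsky's actual matching algorithm):

```python
from dataclasses import dataclass

@dataclass
class Mask:
    model: str
    nose_width: float   # nominal face dimensions (mm) -- hypothetical schema
    face_height: float

CATALOG = [Mask("nasal-S", 34.0, 100.0), Mask("nasal-M", 38.0, 110.0),
           Mask("full-L", 42.0, 125.0)]

def extract_features(facial_image):
    """Stand-in for landmark detection: returns measured dimensions (mm)."""
    return {"nose_width": facial_image["nose_width"],
            "face_height": facial_image["face_height"]}

def match_masks(features, catalog, top_n=2):
    """Rank masks by closeness of their nominal dimensions to the measured face."""
    def misfit(mask):
        return (abs(mask.nose_width - features["nose_width"])
                + abs(mask.face_height - features["face_height"]))
    return sorted(catalog, key=misfit)[:top_n]

def wearing_picture(facial_image, mask):
    """Stand-in for compositing the selected mask onto the facial image."""
    return {"image": facial_image, "overlay": mask.model}

image = {"nose_width": 37.0, "face_height": 108.0}   # pretend captured frame
features = extract_features(image)                   # step 2: feature data
candidates = match_masks(features, CATALOG)          # step 3: matched first masks
target = candidates[0]                               # step 4: user picks "#1 Match"
picture = wearing_picture(image, target)             # step 5: wearing picture info
print(target.model)  # prints "nasal-M"
```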
Regarding claim 2, Lashinsky teaches The method according to claim 1, wherein after determining the actual facial feature data according to the facial image, the method further comprises: determining a geometric relationship between positions of key features according to the actual facial feature data; determining whether a face of the user is skewed according to the geometric relationship between the positions of the key features; and when it is determined that the face of the user is skewed, generating prompt information to prompt the user to adjust facial pose (See figure 36, processing facial geometry)(the facial geometries of the patient determined from the scanning operation. See paragraph [0058]) (In such instances where software application/tool 41 has determined that the face of patient P is not properly located/positioned, a corrective instruction 232 is provided in message area 224 to assist in correcting the positioning of the patient. See paragraph [0049]).

Regarding claim 11, Lashinsky teaches A terminal device, comprising a display, a processor, a memory, and a program or an instruction stored on the memory and executable on the processor, wherein the program or the instruction, when executed by the processor, implements operations (The exemplary Visual Presentation and Interaction Component 20 is a tablet PC and includes a housing 22, an input apparatus 24 (which in the illustrated embodiment is a button), a touchscreen display 26, at least one of a forward facing camera 28 or a rearward facing camera 30, a depth camera 32, and a processor apparatus 34 (FIG. 4) disposed in housing 22. A user is able to provide input into processor apparatus 34 using input apparatus 24 and touchscreen display 26. Processor apparatus 34 provides output signals to touchscreen display 26 to enable touchscreen display 26 to display information to the user as described in detail herein.
See paragraph [0030])( Processor apparatus 34 comprises a processor 36, a fixed disk storage device 37, and a memory module 38. Processor 36 may be, for example and without limitation, a microprocessor (μP) that interfaces with memory module 38. Memory module 38 can be any one or more of a variety of types of internal and/or external storage media such as, without limitation, RAM, ROM, EPROM(s), EEPROM(s), FLASH, and the like that provide a storage register, i.e., a machine readable medium, for data storage such as in the fashion of an internal storage area of a computer, and can be volatile memory or nonvolatile memory. Fixed disk storage device 37 has stored therein a number of routines that are executable by processor 36 See paragraph [0031]) comprising: acquiring a facial image of a user (Upon selection of “Begin Scanning” button 206, software application/tool 41 will cause a patient scanning screen 220 to be displayed, as will now be discussed in conjunction with FIGS. 18-29. In the example embodiment described herein, scanning of a patient P is carried out by a user such as previously described (e.g., without limitation, a clinician or DME provider) holding Visual Presentation and Interaction Component 20 using rearward facing camera 30 and 3D imaging apparatus 32 of Visual Presentation and Interaction Component 20 to capture images of patient P. However, it is to be appreciated that scanning of a patient may be carried out by the patient them self in a “selfie” type scanning mode using forward facing camera 28 and 3D imaging apparatus of Visual Presentation and Interaction Component 20. Additionally, it is to be appreciated that other image capturing devices in addition to, or in place of, one or more of forward facing camera 28, rearward facing camera 30, and/or 3D imaging apparatus 32 may be employed for capturing images of a patient without varying from the scope of the present invention. 
See paragraph [0047])( Patient scanning screen 220 includes an image display area 222 and a message area 224. Image display area 222 may include an indicator 226, which in the example illustrated is in the form of a dashed elliptical line, provided therein for assisting in obtaining optimum placement of Visual Presentation and Interaction Component 20 with respect to patient P or vice-versa. In the example embodiment illustrated in FIGS. 18-29, video images captured by rearward facing camera 30 are displayed in image display area 222 to help guide the user in obtaining a successful 3D scan. Alternatively, video images captured by forward facing camera 28 may be displayed in image display area 222 if electronic device is being operated by the patient them self to conduct a 3D scan of his or herself. In such second instance, images captured by front facing camera 28 would be used to assist the patient in obtaining a successful 3D scan of his or herself. See paragraph [0048]); determining actual facial feature data according to the facial image (See figure 36, processing facial geometry)(the facial geometries of the patient determined from the scanning operation. See paragraph [0058]); determining and displaying one or more matched first masks of various models according to the actual facial feature data (See figure 37, calculating mask recommendations)( During such processing, software application/tool 41 may cause one or more of a processing screen 300 (FIG. 36) and/or a calculating screen 310 (FIG. 37) to be displayed on display 26, providing an indication to the user that software application/tool 41 is working to determine masks for the patient. 
See paragraph [0056])( Mask suggestion portion 324 of mask suggestion screen 320 includes a ranked selectable listing of different masks and sizing details thereof determined to be a best fit for the patient based on potential masks available (e.g., those indicated on Settings screen 132 previously discussed), dimensional information (e.g., geometries, dimensions, etc.) of the potential masks available, the facial geometries of the patient determined from the scanning operation, and the patient responses previously discussed. Such listing may also include an image of each mask. In the example shown in FIGS. 38-40, mask suggestion portion includes four different user selectable mask suggestions 326A-326D, however, it is to be appreciated that the quantity of mask suggestions provided may vary without varying from the scope of the present concept. See paragraph [0058]); determining a target mask from the one or more matched first masks according to received first input from the user (As shown in FIG. 39, software application/tool 41 will further provide a 3D mask image 332 of a particular mask arrangement from listing portion 322 upon selection thereof by a user. In the example shown in FIG. 39, the “#1 Match”, mask suggestion 326A was selected and thus mask image 332 thereof is displayed on 3D model 330. As an alternative to shaded 3D model 330, a user may select a “View on Glass Head” button 334 provided on mask suggestion screen 320. Upon selection of “View on Glass Head” button 334, software application/tool 41 changes the appearance of 3D model 330 from a shaded appearance to a glass-like 3D model 330′, such as shown in FIG. 40 which is rotatable similar to shaded 3D model 330. See paragraph [0059]); and generating wearing picture information according to the facial image and the target mask, and displaying the wearing picture information (As shown in FIG. 
39, software application/tool 41 will further provide a 3D mask image 332 of a particular mask arrangement from listing portion 322 upon selection thereof by a user. In the example shown in FIG. 39, the “#1 Match”, mask suggestion 326A was selected and thus mask image 332 thereof is displayed on 3D model 330. As an alternative to shaded 3D model 330, a user may select a “View on Glass Head” button 334 provided on mask suggestion screen 320. Upon selection of “View on Glass Head” button 334, software application/tool 41 changes the appearance of 3D model 330 from a shaded appearance to a glass-like 3D model 330′, such as shown in FIG. 40 which is rotatable similar to shaded 3D model 330. See paragraph [0059]).

Regarding claim 12, Lashinsky teaches A non-transitory computer-readable storage medium, storing a program or an instruction thereon, wherein, the program or the instruction, when executed by a processor, implements operations (The exemplary Visual Presentation and Interaction Component 20 is a tablet PC and includes a housing 22, an input apparatus 24 (which in the illustrated embodiment is a button), a touchscreen display 26, at least one of a forward facing camera 28 or a rearward facing camera 30, a depth camera 32, and a processor apparatus 34 (FIG. 4) disposed in housing 22. A user is able to provide input into processor apparatus 34 using input apparatus 24 and touchscreen display 26. Processor apparatus 34 provides output signals to touchscreen display 26 to enable touchscreen display 26 to display information to the user as described in detail herein. See paragraph [0030])( Processor apparatus 34 comprises a processor 36, a fixed disk storage device 37, and a memory module 38. Processor 36 may be, for example and without limitation, a microprocessor (μP) that interfaces with memory module 38.
Memory module 38 can be any one or more of a variety of types of internal and/or external storage media such as, without limitation, RAM, ROM, EPROM(s), EEPROM(s), FLASH, and the like that provide a storage register, i.e., a machine readable medium, for data storage such as in the fashion of an internal storage area of a computer, and can be volatile memory or nonvolatile memory. Fixed disk storage device 37 has stored therein a number of routines that are executable by processor 36 See paragraph [0031]) comprising: acquiring a facial image of a user(Upon selection of “Begin Scanning” button 206, software application/tool 41 will cause a patient scanning screen 220 to be displayed, as will now be discussed in conjunction with FIGS. 18-29. In the example embodiment described herein, scanning of a patient P is carried out by a user such as previously described (e.g., without limitation, a clinician or DME provider) holding Visual Presentation and Interaction Component 20 using rearward facing camera 30 and 3D imaging apparatus 32 of Visual Presentation and Interaction Component 20 to capture images of patient P. However, it is to be appreciated that scanning of a patient may be carried out by the patient them self in a “selfie” type scanning mode using forward facing camera 28 and 3D imaging apparatus of Visual Presentation and Interaction Component 20. Additionally, it is to be appreciated that other image capturing devices in addition to, or in place of, one or more of forward facing camera 28, rearward facing camera 30, and/or 3D imaging apparatus 32 may be employed for capturing images of a patient without varying from the scope of the present invention. See paragraph [0047])( Patient scanning screen 220 includes an image display area 222 and a message area 224. 
Image display area 222 may include an indicator 226, which in the example illustrated is in the form of a dashed elliptical line, provided therein for assisting in obtaining optimum placement of Visual Presentation and Interaction Component 20 with respect to patient P or vice-versa. In the example embodiment illustrated in FIGS. 18-29, video images captured by rearward facing camera 30 are displayed in image display area 222 to help guide the user in obtaining a successful 3D scan. Alternatively, video images captured by forward facing camera 28 may be displayed in image display area 222 if electronic device is being operated by the patient them self to conduct a 3D scan of his or herself. In such second instance, images captured by front facing camera 28 would be used to assist the patient in obtaining a successful 3D scan of his or herself. See paragraph [0048]); determining actual facial feature data according to the facial image (See figure 36, processing facial geometry)(the facial geometries of the patient determined from the scanning operation. See paragraph [0058]); determining and displaying one or more matched first masks of various models according to the actual facial feature data (See figure 37, calculating mask recommendations)( During such processing, software application/tool 41 may cause one or more of a processing screen 300 (FIG. 36) and/or a calculating screen 310 (FIG. 37) to be displayed on display 26, providing an indication to the user that software application/tool 41 is working to determine masks for the patient. See paragraph [0056])( Mask suggestion portion 324 of mask suggestion screen 320 includes a ranked selectable listing of different masks and sizing details thereof determined to be a best fit for the patient based on potential masks available (e.g., those indicated on Settings screen 132 previously discussed), dimensional information (e.g., geometries, dimensions, etc.) 
of the potential masks available, the facial geometries of the patient determined from the scanning operation, and the patient responses previously discussed. Such listing may also include an image of each mask. In the example shown in FIGS. 38-40, mask suggestion portion includes four different user selectable mask suggestions 326A-326D, however, it is to be appreciated that the quantity of mask suggestions provided may vary without varying from the scope of the present concept. See paragraph [0058]); determining a target mask from the one or more matched first masks according to received first input from the user (As shown in FIG. 39, software application/tool 41 will further provide a 3D mask image 332 of a particular mask arrangement from listing portion 322 upon selection thereof by a user. In the example shown in FIG. 39, the “#1 Match”, mask suggestion 326A was selected and thus mask image 332 thereof is displayed on 3D model 330. As an alternative to shaded 3D model 330, a user may select a “View on Glass Head” button 334 provided on mask suggestion screen 320. Upon selection of “View on Glass Head” button 334, software application/tool 41 changes the appearance of 3D model 330 from a shaded appearance to a glass-like 3D model 330′, such as shown in FIG. 40 which is rotatable similar to shaded 3D model 330. See paragraph [0059]); and generating wearing picture information according to the facial image and the target mask, and displaying the wearing picture information (As shown in FIG. 39, software application/tool 41 will further provide a 3D mask image 332 of a particular mask arrangement from listing portion 322 upon selection thereof by a user. In the example shown in FIG. 39, the “#1 Match”, mask suggestion 326A was selected and thus mask image 332 thereof is displayed on 3D model 330. As an alternative to shaded 3D model 330, a user may select a “View on Glass Head” button 334 provided on mask suggestion screen 320. 
Upon selection of “View on Glass Head” button 334, software application/tool 41 changes the appearance of 3D model 330 from a shaded appearance to a glass-like 3D model 330′, such as shown in FIG. 40 which is rotatable similar to shaded 3D model 330. See paragraph [0059]).

Regarding claim 13, Lashinsky teaches The terminal device according to claim 11, wherein after the operation of determining the actual facial feature data according to the facial image, the operations further comprise: determining a geometric relationship between positions of key features according to the actual facial feature data; determining whether a face of the user is skewed according to the geometric relationship between the positions of the key features; and when it is determined that the face of the user is skewed, generating prompt information to prompt the user to adjust facial pose. (See figure 36, processing facial geometry)(the facial geometries of the patient determined from the scanning operation. See paragraph [0058]) (In such instances where software application/tool 41 has determined that the face of patient P is not properly located/positioned, a corrective instruction 232 is provided in message area 224 to assist in correcting the positioning of the patient. See paragraph [0049]).

Regarding claim 21, Lashinsky teaches The non-transitory computer-readable storage medium according to claim 12, wherein after the operation of determining the actual facial feature data according to the facial image, the operations further comprise: determining a geometric relationship between positions of key features according to the actual facial feature data; determining whether a face of the user is skewed according to the geometric relationship between the positions of the key features; and when it is determined that the face of the user is skewed, generating prompt information to prompt the user to adjust facial pose.
(See figure 36, processing facial geometry)(the facial geometries of the patient determined from the scanning operation. See paragraph [0058]) (In such instances where software application/tool 41 has determined that the face of patient P is not properly located/positioned, a corrective instruction 232 is provided in message area 224 to assist in correcting the positioning of the patient. See paragraph [0049]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 3, 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lashinsky et al. (US 2020/0384229) (hereinafter referred to as Lashinsky) in view of Bai et al. (US 2016/0180587) (hereinafter referred to as Bai).

Regarding claim 3, Lashinsky teaches The method according to claim 2, but is silent to wherein determining and displaying the one or more matched first masks of various models according to the actual facial feature data comprises: acquiring preset corresponding relationships between masks of different models and facial feature data; determining the one or more matched first masks of various models corresponding to the actual facial feature data according to the preset corresponding relationships; and displaying the one or more matched first masks. Bai teaches measuring surfaces of a user’s face, in which various measurements of the face including the nose and chin are determined and utilized in combination with dimensions of a mask to determine a seal score based on a fitting analysis (FIG. 20 depicts an exemplary GUI screenshot of a sealing metric of a PPE device as computed during a virtual fit. In this figure, an exemplary GUI screenshot 2000 shows a user virtually wearing a PPE device 2005. The screenshot 2000 also depicts a computed fitting metric, in this example, the quality of a PPE-facial seal 2010. The quality of the PPE-facial seal 2000 may be color coded, as depicted here. The quality of the PPE-facial seal may be indicated by a single numerical metric 2012, for example.
The screenshot may permit the user to select from a list of PPE devices via a graphical selection area 715. These PPE devices may be have been selected by the virtual fitting station based upon matching criteria. The matching criteria may have been supplied by the user. The matching criteria may have been predetermined by the user's employer, for example. The user may be able to select a different size of the PPE device by selecting among a series of size buttons 2025. In some embodiments, the user may be able to navigate forward and backwards through the various user screens using navigation buttons, for example. See paragraph [0075])( In FIG. 11, an exemplary menton locating method 1100 is described from a vantage point of the processor 165 depicted in FIG. 1. The exemplary menton locating method 1100 may be performed as part of the fit menton region step 1015 of the fit prediction method 1000, for example. The menton locating method 1100 begins by identifying facial features in a virtual face 1105. For example, eyes, nose and a mouth may be identified by characteristics that are unique to each of these features. A nose-tip, for example, may be located by locating the highest Z-elevation location in the virtual face. Then the processor 165 calculates a mid-sagittal plane of symmetry 1110. The processor may, for example, convolve a mirror image of the virtual face with the non-mirrored virtual face. The translated location where the mirrored image best matches the non-mirrored image may then be used to find a line of symmetry. 
A line of symmetry may be determined by finding the image locations where the mirrored image aligns with the non-mirrored image. See paragraph [0065])

Lashinsky and Bai both teach virtual mask visualization and selection, and Bai teaches that by taking particular measurements a seal rating can be determined based on the individual's face and particular mask (See figure 20); therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Lashinsky with the relationships between masks and individual facial features to determine a seal rating of Bai such that the user could pick the mask that would provide the most protection for the particular mask use case.

Regarding claim 14, Lashinsky teaches The terminal device according to claim 13, but is silent to wherein the operation of determining and displaying the one or more matched first masks of various models according to the actual facial feature data comprises: acquiring preset corresponding relationships between masks of different models and facial feature data; determining the one or more matched first masks of various models corresponding to the actual facial feature data according to the preset corresponding relationships; and displaying the one or more matched first masks. Bai teaches measuring surfaces of a user’s face, in which various measurements of the face including the nose and chin are determined and utilized in combination with dimensions of a mask to determine a seal score based on a fitting analysis (FIG. 20 depicts an exemplary GUI screenshot of a sealing metric of a PPE device as computed during a virtual fit. In this figure, an exemplary GUI screenshot 2000 shows a user virtually wearing a PPE device 2005. The screenshot 2000 also depicts a computed fitting metric, in this example, the quality of a PPE-facial seal 2010. The quality of the PPE-facial seal 2000 may be color coded, as depicted here.
The quality of the PPE-facial seal may be indicated by a single numerical metric 2012, for example. The screenshot may permit the user to select from a list of PPE devices via a graphical selection area 715. These PPE devices may have been selected by the virtual fitting station based upon matching criteria. The matching criteria may have been supplied by the user. The matching criteria may have been predetermined by the user's employer, for example. The user may be able to select a different size of the PPE device by selecting among a series of size buttons 2025. In some embodiments, the user may be able to navigate forward and backward through the various user screens using navigation buttons, for example. See paragraph [0075]) (In FIG. 11, an exemplary menton locating method 1100 is described from a vantage point of the processor 165 depicted in FIG. 1. The exemplary menton locating method 1100 may be performed as part of the fit menton region step 1015 of the fit prediction method 1000, for example. The menton locating method 1100 begins by identifying facial features in a virtual face 1105. For example, eyes, nose and a mouth may be identified by characteristics that are unique to each of these features. A nose-tip, for example, may be located by locating the highest Z-elevation location in the virtual face. Then the processor 165 calculates a mid-sagittal plane of symmetry 1110. The processor may, for example, convolve a mirror image of the virtual face with the non-mirrored virtual face. The translated location where the mirrored image best matches the non-mirrored image may then be used to find a line of symmetry. A line of symmetry may be determined by finding the image locations where the mirrored image aligns with the non-mirrored image. See paragraph [0065])

Lashinsky and Bai teach virtual mask visualization and selection, and Bai teaches that by taking particular measurements a seal rating can be determined based on the individual's face and the particular mask (see FIG. 20). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Lashinsky with Bai's relationships between masks and individual facial features for determining a seal rating, such that the user could pick the mask that would provide the most protection for the particular mask use case.

Allowable Subject Matter

Claims 4-9 and 15-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

The prior art of record, alone or in combination, is silent to the limitations “determining a first mask type according to the first sub-corresponding relationship and the facial features in the actual facial feature data” of claim 4 when read in light of the rest of the limitations in claim 4 and the claims from which claim 4 depends; thus claim 4 contains allowable subject matter. Claim 5 contains allowable subject matter because it depends on a claim that contains allowable subject matter.
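The mid-sagittal symmetry search recited in Lashinsky's method 1100 (matching a mirrored copy of the virtual face against the original and taking the best-aligning translation) can be sketched in a few lines. This is an illustrative reconstruction only, not Lashinsky's actual implementation: the 2D height-map representation, the mean-squared-error score, the overlap restriction, and the function name are all assumptions made for the example.

```python
import numpy as np

def find_symmetry_column(face):
    """Estimate the mid-sagittal line of a face height map: slide a
    horizontally mirrored copy over the original and keep the shift
    with the smallest mean squared difference on the overlap."""
    mirrored = face[:, ::-1]          # mirrored[:, j] == face[:, w-1-j]
    h, w = face.shape
    best_shift, best_err = 0, np.inf
    # Restrict shifts so a tiny overlap cannot win spuriously.
    for shift in range(-(w // 2), w // 2 + 1):
        if shift >= 0:
            a, b = face[:, shift:], mirrored[:, : w - shift]
        else:
            a, b = face[:, : w + shift], mirrored[:, -shift:]
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best_err, best_shift = err, shift
    # At the best shift, columns c and (w-1+shift-c) mirror each
    # other, so the axis of symmetry sits at their midpoint.
    return (w - 1 + best_shift) // 2
```

For a height map whose ridge (the "nose") peaks at column 4 of 9, the function returns 4; the Z-elevation nose-tip heuristic from the same passage would then only need `np.unravel_index(np.argmax(face), face.shape)`.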
The prior art of record, alone or in combination, is silent to the limitations “wherein: the facial feature data comprises size information of a nose, the size information of the nose comprises nose width information, nose bridge height information, and distance information from nose tip to an upper jaw and distance information from the nose tip to a lower jaw; or the facial feature data comprises the size information of the nose, the size information of the nose comprises the nose width information, the nose bridge height information, and the distance information from the nose tip to the upper jaw; or the facial feature data comprises the size information of the nose, the size information of the nose comprises the nose width information, the nose bridge height information, and the distance information from the nose tip to the lower jaw.” of claim 6 when read in light of the rest of the limitations in claim 6 and the claims from which claim 6 depends; thus claim 6 contains allowable subject matter. Claims 7-9 contain allowable subject matter because they depend on a claim that contains allowable subject matter.

The prior art of record, alone or in combination, is silent to the limitations “determining a first mask size according to the second sub-corresponding relationship and size information of the key features in the actual facial feature data” of claim 15 when read in light of the rest of the limitations in claim 15 and the claims from which claim 15 depends; thus claim 15 contains allowable subject matter. Claim 16 contains allowable subject matter because it depends on a claim that contains allowable subject matter.

The prior art of record, alone or in combination, is silent to the limitations “wherein: the facial feature data comprises size information of a nose, the size information of the nose comprises nose width information, nose bridge height information, and distance information from nose tip to an upper jaw and distance information from the nose tip to a lower jaw; or the facial feature data comprises the size information of the nose, the size information of the nose comprises the nose width information, the nose bridge height information, and the distance information from the nose tip to the upper jaw; or the facial feature data comprises the size information of the nose, the size information of the nose comprises the nose width information, the nose bridge height information, and the distance information from the nose tip to the lower jaw.” of claim 17 when read in light of the rest of the limitations in claim 17 and the claims from which claim 17 depends; thus claim 17 contains allowable subject matter. Claims 18-20 contain allowable subject matter because they depend on a claim that contains allowable subject matter.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS R WILSON, whose telephone number is (571) 272-0936. The examiner can normally be reached M-F, 7:30 AM-5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (572)-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
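The "preset corresponding relationships" of claim 14, keyed on facial feature data like the nose measurements recited in claims 6 and 17, amount to a lookup from measurements to mask models. The sketch below is purely hypothetical: every model name, field, unit, and threshold is invented for illustration and appears nowhere in the application or the cited art.

```python
from dataclasses import dataclass

@dataclass
class NoseMeasurements:
    # Fields mirror the recited facial feature data; values in mm
    # (units are an assumption for this example).
    width: float
    bridge_height: float
    tip_to_upper_jaw: float
    tip_to_lower_jaw: float

# Hypothetical preset relationships: each entry maps measurement
# ceilings to a mask model. Entries are checked in order; the first
# whose ceilings the face satisfies wins.
PRESET_RELATIONSHIPS = [
    ({"width": 34.0, "bridge_height": 28.0}, "NasalFit-S"),
    ({"width": 38.0, "bridge_height": 32.0}, "NasalFit-M"),
    ({}, "NasalFit-L"),  # fallback: no ceilings, accommodates any face
]

def match_mask(m: NoseMeasurements) -> str:
    """Return the first preset model whose ceilings the face satisfies."""
    for ceilings, model in PRESET_RELATIONSHIPS:
        if all(getattr(m, field) <= limit for field, limit in ceilings.items()):
            return model
    raise ValueError("no matching mask model")
```

An ordered ceiling table is one simple way to encode such a relationship; a real fitting system would more likely score every candidate (as Bai's seal metric does) rather than stop at the first match.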
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NICHOLAS R WILSON/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Jun 24, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602869
APPARATUS, SYSTEMS AND METHODS FOR PROCESSING IMAGES
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602891
TELEPORTATION SYSTEM COMBINING VIRTUAL REALITY AND AUGMENTED REALITY
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12579605
INFORMATION PROCESSING DEVICE AND METHOD OF CONTROLLING DISPLAY DEVICE
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12567215
SYSTEM AND METHOD OF CONTROLLING SYSTEM
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561911
3D CAGE GENERATION USING SIGNED DISTANCE FUNCTION APPROXIMANT
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
87%
Grant Probability
99%
With Interview (+12.1%)
1y 12m
Median Time to Grant
Low
PTA Risk
Based on 537 resolved cases by this examiner. Grant probability derived from career allow rate.
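The footnote's derivation of the headline figures from the career counts is plain arithmetic and can be reproduced directly (the rounding convention is an assumption):

```python
granted, resolved = 467, 537               # examiner's career counts, as stated
allow_rate = 100 * granted / resolved      # career allow rate, in percent
print(round(allow_rate))                   # -> 87 (Grant Probability)

interview_lift = 12.1                      # percentage points, as stated
print(round(allow_rate + interview_lift))  # -> 99 (With Interview)
```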
