Prosecution Insights
Last updated: April 19, 2026
Application No. 19/226,224

Smart Article Visual Communication Based On Facial Movement

Final Rejection — §103
Filed: Jun 03, 2025
Examiner: MISHLER, ROBIN J
Art Unit: 2628
Tech Center: 2600 — Communications
Assignee: Nantworks LLC
OA Round: 4 (Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 5m
With Interview: 75%

Examiner Intelligence

Career Allow Rate: 69% — above average (488 granted / 707 resolved; +7.0% vs TC avg)
Interview Lift: +5.9% — moderate lift in resolved cases with an interview
Typical Timeline: 2y 5m average prosecution; 28 applications currently pending
Career History: 735 total applications across all art units

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 56.4% (+16.4% vs TC avg)
§102: 35.2% (-4.8% vs TC avg)
§112: 4.6% (-35.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 707 resolved cases.
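The "vs TC avg" deltas above can be cross-checked against the examiner's own rates. As a quick sketch (assuming each delta is a simple difference between the examiner's rate and the Tech Center average estimate, which the page does not state explicitly), the implied TC averages fall out directly:

```python
# Back out the implied Tech Center average for each statute.
# Assumption: delta = examiner_rate - tc_avg, so tc_avg = examiner_rate - delta.
rates = {"101": 1.9, "103": 56.4, "102": 35.2, "112": 4.6}    # examiner rates (%)
deltas = {"101": -38.1, "103": 16.4, "102": -4.8, "112": -35.4}  # "vs TC avg" (%)

tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)
```

Notably, every statute's implied TC average comes out the same, suggesting the dashboard compares against a single flat estimate rather than per-statute averages.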

Office Action

§103
DETAILED ACTION

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 7, and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Duncan (US 2016/0029716) in view of Dange (US 2019/0035153).

Regarding claim 1, Duncan discloses a smart mask system comprising: a first material layer (16, fig. 2) configured to cover a portion of a face of a person (see fig. 2); at least one display (30, fig. 4) connected to the first material layer and configured to display images (para. 29-30); a location sensor (78, fig. 9) configured to obtain location data associated with the smart mask system (para. 55; wherein the input is a location of the user); and a computer-based control module comprising: at least one non-transitory memory (83, fig. 9) storing a plurality of indexed image sets (para. 33) and software instructions, wherein each of the plurality of image sets is indexed based on location data (para. 33, 55; wherein certain images are used in certain locations of the user, and image selection is predetermined by the user); and at least one processor (80, fig. 9) coupled with the at least one non-transitory memory that performs the following operations upon execution of the software instructions: determining a context based on a location derived from the location data obtained from the location sensor (para. 55; wherein the context of a location is determined, e.g., whether the user is at home or at the office); selecting an indexed image set from the plurality of indexed image sets based on at least the determined context (para. 33, 55; e.g., time of day) and the location data (para. 33, 55; wherein images displayed are based on time of day and location of the user, as predefined by the user), wherein selecting the indexed image set includes selecting the indexed image set via at least one of a look-up (para. 33, 55; wherein images are retrieved based on the location of the user, e.g., each image is indexed for use by the mask to a particular location of the user); and rendering at least one image from the location-based selected image set on the display (para. 33, 55), wherein the display comprises multiple light-emitting diodes (see OLED in para. 32) configured to render a visual digital image covering at least a portion of the first material layer (para. 32; wherein the display layer 30 provides an image on the outward-facing first material layer 16).

Duncan fails to disclose selecting a formal image when the determined context is a formal context. Dange discloses selecting a formal image when the determined context is a formal context (para. 49, 51, 62; wherein the context of a location is determined and corresponding images are displayed by a display, e.g., formal images are used in a location determined to have a formal context), and selecting an informal image when the determined context is an informal context (para. 49, 51, 62). When the invention was made (pre-AIA) or before the effective filing date of the claimed invention (AIA), it would have been obvious to one of ordinary skill in the art to include the teachings of Dange in the device of Duncan. The motivation for doing so would have been to select and display images that match the current context of the user's location (Dange; para. 49, 51, 62, wherein when located in an area that has a formal context, formal images are displayed), such that the user does not look out of place with respect to the context of the user's location.

Regarding claim 3, Duncan discloses wherein the formal context corresponds to an office location (see "work" in para. 55).

Regarding claim 4, Duncan discloses wherein the informal context corresponds to at least one of a park location and a sports venue location (see "out in public" in para. 55).

Regarding claim 7, Duncan discloses wherein the location data is derived from at least one of the following: a GPS sensor, an internal movement unit, SLAM data, vSLAM data, and a wireless triangulation device (see para. 55; wherein the location of the user is determined).

Regarding claim 10, Duncan discloses further comprising a plurality of sensors configured to detect facial movements of the person (para. 29, 33, 55).

Regarding claim 11, Duncan discloses wherein the plurality of sensors comprises at least one of piezoelectric sensors, thermal sensors, stretch sensors, and compression sensors (para. 36).

Regarding claim 12, Duncan discloses wherein the computer-based control module is configured to render the at least one image based on both the detected facial movements and the determined context (para. 33, 55).

Regarding claim 13, Duncan discloses wherein the at least one display comprises a high-resolution display having a resolution greater than 70 dots per inch (see OLED in para. 32).

Regarding claim 14, Duncan discloses wherein the at least one display comprises a low-resolution LED array having a resolution less than 70 dots per inch (see e-ink in para. 32).

Regarding claim 15, Duncan discloses wherein the at least one display permits airflow (para. 22).

Regarding claim 16, Duncan discloses further comprising at least one filter comprising antimicrobial material (para. 20-21).
Regarding claim 17, Duncan discloses wherein the computer-based control module is configured to automatically switch between image sets when a change in location is detected (para. 33, 55).

Regarding claim 18, Duncan discloses further comprising a wireless transceiver allowing communications with external devices (para. 51, 54).

Regarding claim 19, Duncan discloses wherein the wireless transceiver operates within a personal area network (para. 51, 54).

Regarding claim 20, Duncan discloses wherein the computer-based control module is configured to modify the rendered at least one image to thereby reduce facial recognition (para. 35).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Duncan in view of Dange, in further view of Claire (US 2021/0081749).

Regarding claim 8, Duncan fails to disclose selecting a language based on the determined location. Claire discloses wherein the computer-based control module is further configured to select a language module based on the location (para. 72; wherein the message is translated to the appropriate language of the location based on the location of the user). When the invention was made (pre-AIA) or before the effective filing date of the claimed invention (AIA), it would have been obvious to one of ordinary skill in the art to include the teachings of Claire in the device of Duncan. The motivation for doing so would have been to determine the appropriate language of the user by GPS (Claire; para. 72), resulting in better communication between the user and locals.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Duncan in view of Dange and Claire, in further view of Morgado (US 2018/0225988).

Regarding claim 9, Duncan fails to disclose displaying translated text. Morgado discloses wherein the computer-based control module is configured to display translated text based on the selected language module (fig. 4C-D and para. 23, 41-42; wherein the input language is output in translated form on a display based on the selected language). When the invention was made (pre-AIA) or before the effective filing date of the claimed invention (AIA), it would have been obvious to one of ordinary skill in the art to include the teachings of Morgado in the device of Duncan in view of Claire. The motivation for doing so would have been to have a wearable device able to display translated text to a user or third party (Morgado; fig. 4C-D and para. 23, 41-42), resulting in better communication and understanding.

Allowable Subject Matter

Claims 5-6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant's arguments with respect to the claims have been considered but are moot in view of the new grounds of rejection. See the new citations above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBIN J MISHLER, whose telephone number is (571) 270-7251. The examiner can normally be reached 8:00-5:00 M-F.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, NITIN PATEL, can be reached at (571) 272-7677. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBIN J MISHLER/
Primary Examiner, Art Unit 2628

Prosecution Timeline

Jun 03, 2025
Application Filed
Jul 07, 2025
Non-Final Rejection — §103
Sep 10, 2025
Response Filed
Sep 15, 2025
Final Rejection — §103
Nov 17, 2025
Response after Non-Final Action
Nov 21, 2025
Request for Continued Examination
Nov 25, 2025
Response after Non-Final Action
Dec 01, 2025
Non-Final Rejection — §103
Feb 02, 2026
Response Filed
Feb 13, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598714 — LOCKING STRUCTURE AND ELECTRONIC DEVICE (2y 5m to grant; granted Apr 07, 2026)
Patent 12596453 — TOUCH SENSOR AND A METHOD FOR DETECTING A USER'S TOUCH (2y 5m to grant; granted Apr 07, 2026)
Patent 12585351 — TOUCH DISPLAY DEVICE (2y 5m to grant; granted Mar 24, 2026)
Patent 12568688 — DISPLAY DEVICE AND TILED DISPLAY DEVICE (2y 5m to grant; granted Mar 03, 2026)
Patent 12567184 — VIDEO PROCESSING METHOD, APPARATUS AND DEVICE (2y 5m to grant; granted Mar 03, 2026)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 69%
With Interview: 75% (+5.9%)
Median Time to Grant: 2y 5m
PTA Risk: High
Based on 707 resolved cases by this examiner. Grant probability derived from career allow rate.
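The headline figures can be reproduced from the raw counts shown in the Examiner Intelligence section. This is a minimal sketch, assuming the tool rounds to whole percentage points and applies the interview lift additively (the page does not state its exact methodology):

```python
# Reproduce the dashboard's grant-probability figures from raw counts.
# Assumption: whole-percent rounding and an additive interview lift.
granted, resolved = 488, 707

grant_probability = round(granted / resolved * 100)  # career allow rate, in %
interview_lift = 5.9                                 # percentage points
with_interview = round(grant_probability + interview_lift)

print(f"{grant_probability}% base, {with_interview}% with interview")
```

Under these assumptions, 488/707 rounds to 69%, and adding the 5.9-point lift rounds to 75%, matching the numbers displayed above.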
