Prosecution Insights
Last updated: April 19, 2026
Application No. 18/544,960

COMMUNICATION APPARATUS, COMMUNICATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Non-Final OA: §101, §102, §112
Filed: Dec 19, 2023
Examiner: HOLTZCLAW, MICHAEL T.
Art Unit: 3796
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: NEC Corporation
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 78%, above average (173 granted / 223 resolved; +7.6% vs TC avg)
Interview Lift: +14.4% (moderate), based on resolved cases with interview
Avg Prosecution: 2y 10m typical timeline; 34 applications currently pending
Total Applications: 257 career-wide, across all art units

Statute-Specific Performance

§101: 5.9% (-34.1% vs TC avg)
§102: 18.9% (-21.1% vs TC avg)
§103: 33.7% (-6.3% vs TC avg)
§112: 28.5% (-11.5% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 223 resolved cases
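The "vs TC avg" deltas above imply a common Tech Center baseline for each statute. A minimal sketch of that arithmetic (the variable names are illustrative; the dashboard's internal schema is unknown):

```python
# Back out the implied Tech Center average for each statute from the
# examiner's statute-specific rate and its "vs TC avg" delta.
examiner_rate = {"101": 5.9, "103": 33.7, "102": 18.9, "112": 28.5}     # percent
delta_vs_tc   = {"101": -34.1, "103": -6.3, "102": -21.1, "112": -11.5}  # percentage points

# examiner_rate = tc_avg + delta  =>  tc_avg = examiner_rate - delta
tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)  # every statute works out to a 40.0% baseline
```

Notably, all four statutes back out to the same 40.0% baseline, consistent with a single Tech Center average estimate being used for the comparison.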

Office Action

Rejections under §101, §102, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The Information Disclosure Statement filed 12/19/2023 has been considered by the Examiner.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: 1000. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

*Reference character 1000 is mentioned in Par. [0052] and [0054], but is not shown in the drawings.

Specification

The disclosure is objected to because of the following informalities: Page 8, line 2: “the authentication unit 230” should be “the authentication unit 110”. Appropriate correction is required.

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Objections

Claim 4 is objected to because of the following informalities: Line 3: please indent the line starting with “including the content as at least a part of the first information …”. Appropriate correction is required.
Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-13 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1, 6, and 10 recite the limitation “analyz[ing] an image of a subject person to generate state information indicating a state of the subject person”. A “state” of a subject person is a very broad categorization that includes many subcategories, such as mental state, physical state, psychological state, spiritual state, emotional state, etc. Therefore, claims 1, 6, and 10 are considered to be drawn to a genus.
MPEP 2163(II)(A)(3)(a)(ii) explains that the written description must lead a person of ordinary skill in the art to understand that the inventor possessed the entire scope of the claimed invention. Ariad, 598 F.3d at 1353–54. It is also explained that a “representative number of species” means that the species which are adequately described are representative of the entire genus. Thus, when there is substantial variation within the genus, one must describe a sufficient variety of species to reflect the variation within the genus. See AbbVie Deutschland GmbH & Co., KG v. Janssen Biotech, Inc., 759 F.3d 1285, 1300, 111 USPQ2d 1780, 1790 (Fed. Cir. 2014).

The Applicant’s specification does have support for certain physical and mental/emotional states (Pars. [0016], [0077] – feeling/physical condition; [0079-0081] – glad feeling, negative feelings (sad, angry, depressed, or the like); and [0095] – disheveled clothing/hair, bad expression/movement/pose), but does not reasonably provide support for a representative number of states of a person (e.g., spiritual states, other mental states (such as alertness, concentration, motivation, self-awareness, stress), and other physical states (such as tired, fatigued, energetic, in pain, etc.)) in order to reasonably convey that the inventor possessed the entire scope of the claimed invention.

*All other claims are rejected due to their dependency on a rejected claim.

Claims 1-13 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1, 6, and 10 include the limitation of at least one processor configured to “analyze an image of a subject person to generate state information indicating a state of the subject person”. The Applicant’s specification appears to essentially recite this claim limitation in Par. [0013], wherein the functional intent of generating state information indicating a state of a subject person is recited, but without disclosing how this functional intent is achieved. The Applicant’s specification discusses potential results where the state information indicates that the target person is glad (Par. [0079-0080]) or has a negative feeling (Par. [0081]), but does not explain how the processor processes an image in order to generate state information indicating a state of a target person. In other words, there is no written description support for how the processor determines a particular state of the target person, such as gladness or a negative feeling.

MPEP 2161.01(I) explains that it is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See, e.g., Vasudevan Software, Inc. v. MicroStrategy, Inc., 782 F.3d 671, 681-683, 114 USPQ2d 1349, 1356, 1357 (Fed. Cir. 2015). MPEP 2161.01(I) also explains that the description requirement of the patent statute requires a description of an invention, not an indication of a result that one might achieve if one made that invention. Original claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved.
For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed (MPEP 2161.01(I), MPEP 2163.02, and MPEP 2181(IV)).

*All other claims are rejected due to their dependency on a rejected claim.

Claims 1-13 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, because the specification, while being enabling for certain physical and mental/emotional states (Pars. [0016], [0077] – feeling/physical condition; [0079-0081] – glad feeling, negative feelings (sad, angry, depressed, or the like); and [0095] – disheveled clothing/hair, bad expression/movement/pose), does not reasonably provide enablement for all types of states of a person (e.g., spiritual states, other mental states (such as alertness, concentration, motivation, self-awareness, stress), and other physical states (such as tired, fatigued, energetic, in pain, etc.)). The specification does not enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the invention commensurate in scope with these claims.

The Applicant’s specification is not enabling for the claim limitation of instant claims 1, 6, and 10 of “generate state information indicating a state of a subject person”. The Wands factors detailed in MPEP 2164.01(a) have been considered. For example, (A) the breadth of the claims was considered. The breadth associated with “a state of a target person” is so large that it encapsulates many subcategories (i.e., mental state, physical state, emotional state, etc.).
Also, (G) the existence of working examples has been considered. The limited examples of physical and mental/emotional states in the Applicant’s specification are not enabling for this broad limitation encapsulating all states of a target person, and practicing the full scope would therefore require undue experimentation. An excessive amount of experimentation (Wands Factor (H)) would be necessary to make or use the invention commensurate with the scope of generating state information indicating all states of a target person, including spiritual states and other mental states such as anxiety/stress. Only limited examples involving general positive feelings of gladness and negative feelings of sadness, anger, and depression, along with examples of physical states such as disheveled clothing/hair and bad expressions, movement, and poses, are provided. These limited examples do not provide enough direction (Wands Factor (F)) to extrapolate out to the numerous states of a human.

*All other claims are also rejected due to their dependency on a rejected claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.

Step 1

Independent claims 1, 6, and 10 are directed to a communication apparatus, a communication method, and a non-transitory computer-readable storage medium storing a program, and thus meet the requirements for step 1.
Step 2A, Prong 1

Regarding claims 1, 6, and 10, the following steps recite an abstract idea:

“analyz[ing] an image of a subject person to generate state information indicating a state of the subject person” is a mental process when given its broadest reasonable interpretation. As discussed in MPEP 2106.04(a)(2)(III), the mental process grouping includes observations, evaluations, judgments, and opinions. In this case, a human could evaluate an image of a subject person in order to make a judgment about their state.

“determin[ing], by using the state information, first information to be transmitted to a first terminal” is a mental process when given its broadest reasonable interpretation. As discussed in MPEP 2106.04(a)(2)(III), the mental process grouping includes observations, evaluations, judgments, and opinions. In this case, a human could make a judgment on first information to be transmitted to a first terminal, based on generated state information.

“transmit[ting] the first information from the second terminal to the first terminal” is a mental process when given its broadest reasonable interpretation. As discussed in MPEP 2106.04(a)(2)(III), the mental process grouping includes observations, evaluations, judgments, and opinions. In this case, a human could transmit (e.g., share/communicate) first information to a different person or location.

Step 2A, Prong 2

Regarding claims 1, 6, and 10, the claims do not include any additional elements that integrate the abstract idea into a practical application. The following elements do not add any meaningful limitation to the abstract idea: “at least one memory”, “at least one processor”, “a computer”, and “first/second terminal” are all recited with a high level of generality. The at least one memory is described (Fig. 6, # 1030) as a main storage apparatus implemented by a random access memory (RAM) and the like (Par. [0050]). The at least one processor is described (Fig. 6, # 1020) as a processor implemented by a central processing unit (CPU), a graphics processing unit (GPU), and the like (Par. [0049]). The computer is interchangeably described as the processor (Par. [0052] and [0054]). The first terminal is described as possibly being a portable terminal such as a smartphone or a tablet type terminal (Par. [0022]). The second terminal is described as having a similar hardware configuration to the first terminal (Par. [0036], Par. [0054]).

The involvement of the “at least one memory”, “at least one processor”, “computer”, and “first/second terminal” is insignificant extra-solution activity in that they amount to generic computer implementation of the abstract idea [MPEP 2106.04(a)(2)(III)(C)]. Furthermore, the “at least one memory”, “at least one processor”, “computer”, and “first/second terminal”, along with their associated functions and components, do not add any meaningful limitation to the abstract idea when considered in combination because these elements are recited at a high level of generality and their related functions and components merely implement the abstract idea on a computer.

Step 2B

The additional elements of claims 1, 6, and 10, when considered either individually or in an ordered combination, are not enough to qualify as significantly more than the abstract idea. As discussed above with respect to the integration of the abstract idea into a practical application, the “at least one memory”, “at least one processor”, “a computer”, and “first/second terminal”, along with their associated functions and components, are recited with a high level of generality and simply amount to implementing the abstract idea on a computer. The additional elements that were considered insignificant extra-solution activity have been re-analyzed and do not amount to anything more than what is well-understood, routine, and conventional.
Also, simply appending well-understood, routine, and conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception is not indicative of an inventive concept [MPEP 2106.05(d)]. The involvement of the “at least one memory”, “at least one processor”, and “computer” is considered storing and retrieving information in memory, a well-understood, routine, and conventional computer activity [MPEP 2106.05(d)(II), Versata Dev. Group, Inc.]. The involvement of the “first/second terminal” is considered receiving and transmitting data, a well-understood, routine, and conventional computer activity [MPEP 2106.05(d)(II), Symantec]. Additionally, “transmit[ting] first information” is transmitting data, a well-understood, routine, and conventional computer activity [MPEP 2106.05(d)(II), Symantec]. Additionally, “a second terminal receiv[ing] an input for permitting the second terminal to communicate with the first terminal” is also considered receiving and transmitting data, a well-understood, routine, and conventional computer activity [MPEP 2106.05(d)(II), Symantec]. In this case, well-known elements of a general computer system are used to implement the abstract idea.

Dependent claims

Regarding dependent claims 2, 7, and 11, the limitations only further define the abstract idea. Regarding dependent claims 3-5, 8-9, and 12-13, the limitations only further define insignificant extra-solution activity of generic computer implementation of the abstract idea.

Therefore, claims 1-13 are unpatentable under 35 U.S.C. 101.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-13 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Mishra, et al. (U.S. PGPub No. 2023/0035981).

Regarding claim 1, Mishra teaches (Fig. 1, # 100) a communication apparatus (Par. [0077-0078] – establishing communication between a user 101A and a patient 101B) comprising: (Fig. 1, # 110 – server, 112 – database) at least one memory configured to store instructions (Par. [0062]; Par. [0082]; Par. [0113]; Par. [0117] – it should be appreciated that the components or portions thereof (e.g., microprocessor, memory/storage, interfaces, etc.) of the system can be combined into one or more devices, such as a server); and (Fig. 1, # 110) at least one processor (Par. [0009]; Par. [0082] – the server 110 may have or utilize the database 112 as a non-transitory repository of data accessible to at least one processor of the server 110) configured to execute the instructions to: (Fig. 1, # 103B; Fig. 3A, 301B) analyze an image of a subject person to generate state information indicating a state of the subject person (Par. [0010] – the processor executes a module responsible for analyzing video data received from a user device to determine a mood of the user (e.g., neutral, happy, distressed); Par. [0079] – each of the user devices 102 may include the camera 103B to capture images/video to determine facial expression, mood, body position, movement, etc.; Par. [0088]); determine, by using the state information, first information to be transmitted to a first terminal (Par. [0005] – this information (e.g., information from an image capturing device) can be used to determine a state of the user/issue and to automatically notify the doctors/nurses to set up a live call/scheduled call based on the urgency of the patient condition; Par. [0026-0027]; Par. [0090]); and (Fig. 2) transmit, when a second terminal receives an input for permitting the second terminal to communicate with the first terminal, the first information from the second terminal to the first terminal (Par. [0005]; Par. [0011-0012] – using a wake word (i.e., input) to connect to the provider (i.e., permitting communication from a patient (i.e., second terminal) to the provider (i.e., first terminal)) based on natural language processing of the wake word by the user (i.e., input); Par. [0026-0028]; Par. [0060] – human input (i.e., consent) before performance of the process or operation; Par. [0086]; Par. [0090]). Therefore, claim 1 is unpatentable over Mishra, et al.

Regarding claim 2, Mishra teaches the communication apparatus according to claim 1, wherein (Fig. 1, # 102B; Fig. 5, # 502 – user interface module) the image of the subject person is being generated while the subject person is viewing a content or after the subject person views the content (Par. [0080] – the user device may be a smart phone with a camera or a personal computer with a camera; Par. [0097] – the user interface module comprises components (such as a display screen) to interact with a user 101B to present media (e.g., audio/video calls) – the subject person is capable of viewing a content when the image is generated (i.e., while on a video call with a provider or other user); Par. [0103]). Therefore, claim 2 is unpatentable over Mishra, et al.

Regarding claim 3, Mishra teaches the communication apparatus according to claim 2, wherein the at least one processor is further configured to perform (Fig. 2) including the image of the subject person in the first information when the state information indicates that the subject person is glad (Par. [0010]; Par. [0086]; Par. [0088]; Par. [0107] – the user data may indicate the user is happy (based on facial expressions) and communicate with the user that the provider will check in in an hour). Therefore, claim 3 is unpatentable over Mishra, et al.

Regarding claim 4, Mishra teaches the communication apparatus according to claim 2, wherein the at least one processor is further configured to perform including the content as at least a part of the first information when the state information indicates that the subject person is glad (Par. [0088] – video/image data associated with the patient indicates the patient’s mood is happy. A server may determine that the operation to perform is to gather additional information/verbally check in with the patient sent through the monitoring device.). Therefore, claim 4 is unpatentable over Mishra, et al.

Regarding claim 5, Mishra teaches the communication apparatus according to claim 1, wherein (Fig. 5, # 514) the state information is generated using a machine learning model (Par. [0005] – machine learning; Par. [0029]; Par. [0083] – the server 110 may utilize technologies, such as Artificial Intelligence, especially Deep Learning, Image Recognition/facial recognition, and Natural Language Processing to intelligently detect the state of the user 101A/101B; Par. [0100]).
Therefore, claim 5 is unpatentable over Mishra, et al.

Regarding claim 6, Mishra teaches (Fig. 1, # 100) a communication method (Par. [0077-0078] – establishing communication between a user 101A and a patient 101B) comprising, by a computer: (Fig. 1, # 103B; Fig. 3A, 301B) analyzing an image of a subject person to generate state information indicating a state of the subject person (Par. [0010] – the processor executes a module responsible for analyzing video data received from a user device to determine a mood of the user (e.g., neutral, happy, distressed); Par. [0079] – each of the user devices 102 may include the camera 103B to capture images/video to determine facial expression, mood, body position, movement, etc.; Par. [0088]); determining, by using the state information, first information to be transmitted to a first terminal (Par. [0005] – this information (e.g., information from an image capturing device) can be used to determine a state of the user/issue and to automatically notify the doctors/nurses to set up a live call/scheduled call based on the urgency of the patient condition; Par. [0026-0027]; Par. [0090]); and (Fig. 2) transmitting, when a second terminal receives an input for permitting the second terminal to communicate with the first terminal, the first information from the second terminal to the first terminal (Par. [0005]; Par. [0011-0012] – using a wake word (i.e., input) to connect to the provider (i.e., permitting communication from a patient (i.e., second terminal) to the provider (i.e., first terminal)) based on natural language processing of the wake word by the user (i.e., input); Par. [0026-0028]; Par. [0060] – human input (i.e., consent) before performance of the process or operation; Par. [0086]; Par. [0090]). Therefore, claim 6 is unpatentable over Mishra, et al.

Regarding claim 7, Mishra teaches the communication method according to claim 6, wherein (Fig. 1, # 102B; Fig. 5, # 502 – user interface module) the image of the subject person is being generated while the subject person is viewing a content or after the subject person views the content (Par. [0080] – the user device may be a smart phone with a camera or a personal computer with a camera; Par. [0097] – the user interface module comprises components (such as a display screen) to interact with a user 101B to present media (e.g., audio/video calls) – the subject person is capable of viewing a content when the image is generated (i.e., while on a video call with a provider or other user); Par. [0103]). Therefore, claim 7 is unpatentable over Mishra, et al.

Regarding claim 8, Mishra teaches the communication method according to claim 7, further comprising, by the computer, (Fig. 2) including the image of the subject person in the first information when the state information indicates that the subject person is glad (Par. [0010]; Par. [0086]; Par. [0088]; Par. [0107] – the user data may indicate the user is happy (based on facial expressions) and communicate with the user that the provider will check in in an hour). Therefore, claim 8 is unpatentable over Mishra, et al.

Regarding claim 9, Mishra teaches the communication method according to claim 7, further comprising, by the computer, including the content as at least a part of the first information when the state information indicates that the subject person is glad (Par. [0088] – video/image data associated with the patient indicates the patient’s mood is happy. A server may determine that the operation to perform is to gather additional information/verbally check in with the patient sent through the monitoring device.). Therefore, claim 9 is unpatentable over Mishra, et al.

Regarding claim 10, Mishra teaches (Fig. 1, # 110) a non-transitory computer-readable storage medium storing a program causing a computer (Par. [0009]; Par. [0033]; Par. [0044]; Par. [0082] – the server 110 may have or utilize the database 112 as a non-transitory repository of data accessible to at least one processor of the server 110) to perform: (Fig. 1, # 103B; Fig. 3A, 301B) analyzing an image of a subject person to generate state information indicating a state of the subject person (Par. [0010] – the processor executes a module responsible for analyzing video data received from a user device to determine a mood of the user (e.g., neutral, happy, distressed); Par. [0079] – each of the user devices 102 may include the camera 103B to capture images/video to determine facial expression, mood, body position, movement, etc.; Par. [0088]); determining, by using the state information, first information to be transmitted to a first terminal (Par. [0005] – this information (e.g., information from an image capturing device) can be used to determine a state of the user/issue and to automatically notify the doctors/nurses to set up a live call/scheduled call based on the urgency of the patient condition; Par. [0026-0027]; Par. [0090]); and (Fig. 2) transmitting, when a second terminal receives an input for permitting the second terminal to communicate with the first terminal, the first information from the second terminal to the first terminal (Par. [0005]; Par. [0011-0012] – using a wake word (i.e., input) to connect to the provider (i.e., permitting communication from a patient (i.e., second terminal) to the provider (i.e., first terminal)) based on natural language processing of the wake word by the user (i.e., input); Par. [0026-0028]; Par. [0060] – human input (i.e., consent) before performance of the process or operation; Par. [0086]; Par. [0090]). Therefore, claim 10 is unpatentable over Mishra, et al.

Regarding claim 11, Mishra teaches the non-transitory computer-readable storage medium according to claim 10, wherein (Fig. 1, # 102B; Fig. 5, # 502 – user interface module) the image of the subject person is being generated while the subject person is viewing a content or after the subject person views the content (Par. [0080] – the user device may be a smart phone with a camera or a personal computer with a camera; Par. [0097] – the user interface module comprises components (such as a display screen) to interact with a user 101B to present media (e.g., audio/video calls) – the subject person is capable of viewing a content when the image is generated (i.e., while on a video call with a provider or other user); Par. [0103]). Therefore, claim 11 is unpatentable over Mishra, et al.

Regarding claim 12, Mishra teaches the non-transitory computer-readable storage medium according to claim 11, wherein the program causes the computer to perform: (Fig. 2) including the image of the subject person in the first information when the state information indicates that the subject person is glad (Par. [0010]; Par. [0086]; Par. [0088]; Par. [0107] – the user data may indicate the user is happy (based on facial expressions) and communicate with the user that the provider will check in in an hour). Therefore, claim 12 is unpatentable over Mishra, et al.

Regarding claim 13, Mishra teaches the non-transitory computer-readable storage medium according to claim 11, wherein the program causes the computer to perform: including the content as at least a part of the first information when the state information indicates that the subject person is glad (Par. [0088] – video/image data associated with the patient indicates the patient’s mood is happy. A server may determine that the operation to perform is to gather additional information/verbally check in with the patient sent through the monitoring device.). Therefore, claim 13 is unpatentable over Mishra, et al.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Hirasawa, et al. (U.S. PGPub No. 2021/0097488)
Zhou (U.S. PGPub No. 2019/0216334)
Manabe, et al. (U.S. PGPub No. 2017/0311864)
Lee (U.S. PGPub No. 2020/0411154)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL TAYLOR HOLTZCLAW whose telephone number is (571) 272-6626. The examiner can normally be reached Monday-Friday (7:30 a.m.-5:00 p.m. EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer McDonald, can be reached at (571) 270-3061. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL T. HOLTZCLAW/
Examiner, Art Unit 3796

Prosecution Timeline

Dec 19, 2023: Application Filed
Nov 25, 2025: Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589027: METHOD FOR CENTERING A CONTACT GLASS AND REFRACTIVE SURGICAL LASER SYSTEM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12569369: SYSTEM FOR LASER-BASED AMETROPIA CORRECTION, AND METHOD FOR THE ALIGNMENT THEREOF (granted Mar 10, 2026; 2y 5m to grant)
Patent 12569694: ADJUSTABLE LEAD SYSTEMS FOR CARDIAC SEPTAL WALL IMPLANTATION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12564517: Avoiding Blood Vessels During Direct Selective Laser Trabeculoplasty (granted Mar 03, 2026; 2y 5m to grant)
Patent 12564515: DOCKING AN EYE FOR OPHTHALMIC LASER TREATMENT (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 92% (+14.4%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 223 resolved cases by this examiner. Grant probability derived from career allow rate.
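The headline figures above fit together arithmetically. A minimal sketch, assuming the interview lift is applied as an additive percentage-point adjustment to the career allow rate (the tool's actual model is not disclosed):

```python
# Career allow rate from the examiner's resolved cases, plus the stated
# interview lift, reproduces the displayed 78% and 92% figures.
granted, resolved = 173, 223
allow_rate = granted / resolved      # ~0.776, displayed as 78%
interview_lift = 0.144               # +14.4 percentage points (assumed additive)
with_interview = allow_rate + interview_lift

print(round(allow_rate * 100))       # 78
print(round(with_interview * 100))   # 92
```

Both rounded values match the dashboard, which supports reading "+14.4%" as percentage points rather than a relative multiplier.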

Free tier: 3 strategy analyses per month