Prosecution Insights
Last updated: April 19, 2026
Application No. 18/515,917

UTILIZING USER RESPONSES IN AUTOMATED CORPUS LABELLING

Current Status: Final Rejection (§103)
Filed: Nov 21, 2023
Examiner: XIAO, DI
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 2 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% — above average (463 granted / 600 resolved; +22.2% vs TC avg)
Interview Lift: +21.7%, a strong lift (allow rate of resolved cases with an interview vs. without)
Typical Timeline: 3y 4m average prosecution; 24 applications currently pending
Career History: 624 total applications across all art units
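
The headline figures above reduce to simple arithmetic on the career counts. A quick sanity check in Python; the TC average is backed out from the displayed +22.2% delta, so treat it as an estimate:

```python
# Sanity-check the Examiner Intelligence figures from the raw counts shown.
granted, resolved = 463, 600

allow_rate = granted / resolved      # 0.7717 -> displayed as 77%
tc_average = allow_rate - 0.222      # implied Tech Center average, ~55%

print(f"allow rate {allow_rate:.1%}, implied TC average {tc_average:.1%}")
```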

Statute-Specific Performance

Statute   Examiner Rate   vs TC Avg
§101      8.2%            -31.8%
§103      57.6%           +17.6%
§102      17.1%           -22.9%
§112      14.2%           -25.8%

TC Avg = Tech Center average estimate • Based on career data from 600 resolved cases
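
The page does not say what the per-statute percentage measures (for example, how often this examiner's office actions raise each statute), so the sketch below only verifies the displayed deltas. Notably, every row backs out to exactly a 40% Tech Center baseline:

```python
# Back out the Tech Center baseline implied by each "vs TC Avg" delta above.
rates  = {"§101": 8.2, "§103": 57.6, "§102": 17.1, "§112": 14.2}   # percent
deltas = {"§101": -31.8, "§103": +17.6, "§102": -22.9, "§112": -25.8}

for statute, rate in rates.items():
    baseline = rate - deltas[statute]          # examiner rate minus delta
    print(f"{statute}: {rate}% vs TC baseline {baseline:.1f}%")
# Every statute yields a 40.0% baseline, suggesting the Tech Center average
# estimate was a single flat value across all four statutes.
```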

Office Action

§103
DETAILED ACTION

In Applicant’s Response dated 12/16/2025, Applicant amended claims 1-20 and argued against all rejections previously set forth in the Office action dated 10/20/2025.

Response to Argument

Applicant’s arguments were considered, but they are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, 10, 15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Joshi, Pub. No. US 2019/0278797 A1, in view of Chessman, Pub. No. US 2018/0315063 A1, and further in view of Kawamoto, Pub. No. US 2024/0330419 A1.

With regard to claim 1: Joshi discloses a computer-implemented method, comprising:

identifying, by a processor set, a time to inject an image and a label (see figs. 3C and 3D for the image and labels such as “Glider”, “Me”, “Ski”, and “Blue Sky”; paragraph 37: “In an example, a plurality of tags “Glider”, “Me”, “Ski”, “Blue Sky” and “Snow Peak” are generated for a plurality of objects marked by the rectangles and displayed on a photosphere as shown in FIG. 3C.”) within a virtual reality environment or an augmented reality environment (wherein the time is based on the user input to browse images; paragraph 27: “The present disclosure provides some examples for processing images in a VR system. FIG. 1 shows a VR system 100 capable of presenting a photographic image to a user. In an example, the VR system 100 may include a terminal device 101 running a VR client and a VR device 102 operated and worn and/or hold by a user. The terminal device 101 stores the photographic image in local, and presents (e.g., displays) the photographic image via the VR client through a VR User Interface (UI), so that the user is able to browse the photographic image through the VR UI via the VR device 102. In another example, the VR system 100 may further include a server device 103 running a VR server. The server device 103 stores the photographic image. The terminal device 101 obtains the photographic image from the server device 103 and presents the photographic image via the VR client through the VR UI.”);

injecting, by the processor set, the image and the label (see figs. 3C and 3D for the image and labels; paragraph 37, quoted above) within the virtual reality environment or the augmented reality environment at the identified time (wherein the time is based on the user input to browse images; paragraph 27, quoted above);

capturing, by the processor set, a user’s response to the injected label and the injected image (the user input indicates whether the label is correct; paragraph 39: “In an example, when the VR client displays the photographic image, the at least one tag is attached on the photographic image, so that the VR client can suggest the at least one tag at Block 203. Then, at Block 204, upon receiving a user instruction, the VR client can determine one or more of the suggested tag that are confirmed by the user according the user instruction. In an example, when the user uses a controller or pointer to point to a tag displayed on the photographic image, the VR client will receive a user instruction (e.g., activation instruction) indicating that the user selects(or activates) the tag to which the controller or pointer points, and then the VR client confirms this tag selected by the user. As shown in FIG. 3D, the user uses a pointer 330 to point inside a rectangular indicating a person object corresponding to a tag “Me”, and the VR client will determine that this tag “Me” is correct and confirm this tag “Me”. In another example, when the user uses a controller or pointer to point to a tag displayed on the photographic image, the VR client will receive a user instruction indicating that the user wants to operate the tag to which the controller or pointer points, then the VR client presents one or more options for the tag on the photographic image, and when the user uses the controller or pointer to point to the option of confirmation, the VR client will receive a user instruction indicating that the user selects(or activates) this tag, and then the VR client confirms this tag selected by the user.”);

determining, by the processor set, whether the injected label accurately describes the injected image, based on the user’s captured response to the injected label and the injected image (the user input indicates whether the label is correct; paragraph 39, quoted above); and

writing, by the processor set, the determination whether the label accurately describes the injected image to a memory (the accuracy is confirmed by the VR client based on a specific identified user input; paragraphs 47-49: “In an example, at Block 204, the VR client confirms any of the at least one tag in response to receiving a first user instruction on the tag. For example, when the user uses a controller or pointer to point at a pixel location on the photosphere, a tag is displayed on the photographic image, the VR client will receive a user instruction indicating that the user selects (or activates) the tag to which the controller or pointer points, and then the VR client confirms this tag selected by the user. And for another example, when the user uses a controller or pointer to point to a tag displayed on the photographic image, the VR client will receive a user instruction indicating that the user wants to operate the tag, and then the VR client presents one or more options for the tag on the photographic image. Here, the VR client may present various options for the tag including options for editing, deleting, and/or confirming the tag and etc. When the user uses the controller or pointer to point to the option of confirmation (i.e., a UI control for confirmation) among the one or more options, the VR client will receive a user instruction indicating that the user confirms to select(i.e., activate) this tag, and then the VR client confirms this tag. In an example, at Block 204, the VR client confirms one or more tags currently presented on the photographic image in response to receiving a user instruction on a UI control for confirmation presented on the photographic image. For example, the VR client presents a UI control (e.g., a button) on the photographic image, when the user uses a controller or pointer to point to the UI control, the VR client will confirm all the tags currently presented on the photographic image. In an example, at Block 204, before confirming a tag, the VR client may further edit the tag according to information inputted by a user in response to receiving a second user instruction on the tag.”).

Joshi does not disclose the aspect wherein the user response is a reflexive biometric response. However, Chessman discloses the aspect wherein the user response is a reflexive biometric response (paragraphs 218 and 219: “As further shown in FIG. 8, the method 800 includes an act 820 of receiving a response to the digital survey question. In particular, act 820 includes receiving, from the client device, a response to the digital survey question comprising a reply to the textual query and biometric data captured by a biometric sensor in response to the biometric query. In one or more embodiments, receiving the response to the digital survey question comprising the reply to the textual query and the biometric data captured in response to the biometric query comprises receiving a data packet that comprises the reply, the biometric data, and the question identifier. Similarly, in one or more embodiments, receiving the response to the digital survey question comprising the reply to the textual query and biometric data captured in response to the biometric query comprises receiving one or more of a blood pressure, breath rate, heart rate, image, video, and audio file as part of the biometric data captured in response to the biometric query.”). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Chessman to Joshi so the system can use a user’s reflexive biometric response to determine the accuracy of the injected label, saving time and effort; a reflexive biometric response can also be faster and more accurate than an intentional input.

Joshi and Chessman do not disclose the aspect of determining, by the processor set, whether the injected label accurately describes the injected image based on using a neural net algorithm trained on a plurality of user interactions with respect to the injected image. However, Kawamoto discloses this aspect (paragraphs 81 and 82: “In accordance with an embodiment, the value of the first weighted score may be determined based on a weight assigned (after the second stage) to the first NL query and the output of the first ML model 110 determined based on the application of the first ML model 110 on the first user response. The weight assigned to the first NL query may indicate the level of difficulty of the first NL query (since the weight assigned to the first NL query may be determined based on the level of difficulty associated with the first NL query). The output of the first ML model 110 may indicate the accuracy of the first user response. The first ML model 110 may generate an output based on whether the first ML model 110 is trained to determine a hard accuracy or a soft accuracy of a user response. If the first ML model 110 is trained to determine the hard accuracy, the output may be “0” (if the user response is determined to be inaccurate with respect to the actual NL response) or “1” (if the user response is determined to be accurate with respect to the actual NL response). Whereas, if the first ML model 110 is trained to determine the soft accuracy, the output may be a real number between “0” and “1”. The output may be “0” if the user response is wrong (or incorrect), “1” if the user response is correct, or between “0” and “1” if the user response is partially correct. For example, the circuitry 202 may determine a match between the first user response (received at 408) and the actual NL response (retrieved at 410) for the first NL query stored in the query database 114. In this scenario, the output of the first ML model 110 may be “1” (based on the match), regardless of whether the first ML model 110 is trained to determine the hard accuracy or the soft accuracy of the first user response. Thus, the second user response may be determined as accurate. The first weighted score may be determined based on the weight assigned to the first NL query since the output of the first ML model 110 is “1”. However, if the first user response is “20/04/2000” and the first ML model 110 is trained to determine a hard accuracy, then the output of the first ML model 110 may be “0” (and the first weighted score may be determined as “0”). The output of the first ML model 110 may be a value that lies between “0” and “1” (for example, 0.8) if the first ML model 110 is trained to determine a soft accuracy. The first weighted score may be determined based on the weight assigned to the first NL query and a value of the output of the first ML model 110.”). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Kawamoto to Joshi and Chessman so the system can be trained to determine the accuracy of the user’s response, making it better informed about the accuracy of the label so that it can act accordingly.

With regard to claims 6 and 15: Joshi, Chessman, and Kawamoto disclose the computer-implemented method of claim 1, further comprising interacting, by the processor set, with the user within the virtual reality environment or the augmented reality environment, wherein the identifying the time to inject the image and the label is based, at least in part, on an interaction with the user within the virtual reality environment or the augmented reality environment (wherein the time is based on the user input to browse images; Joshi, paragraph 27, quoted above).

Claim 10 is rejected for the same reason as claim 1. Claim 18 is rejected for the same reason as claim 1.

Claims 2, 3, 11, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Joshi in view of Chessman and Kawamoto, and further in view of Goel, Pub. No. US 2017/0004548 A1.

With regard to claims 2 and 11: Joshi, Chessman, and Kawamoto do not disclose the computer-implemented method of claim 1, further comprising classifying, by the processor set, the user as a subject matter expert in a specific field. However, Goel discloses classifying, by the processor set, the user as a subject matter expert in a specific field (“As already noted, yet another attribute of the trust factor is the service provider's professional expertise. A service provider's professional expertise in their area can be determined based on various attributes, including but not limited to reviews, recommendations, endorsements, level of education, years of experience, credentials, skills, accolades, as the like. Each of these attributes may be assigned predetermined point values, where the points are summed to create a score for the shared affinity and the score is later used to form the trust factor. The predetermined point values may be the same for different types of attributes (e.g., each year of experience may be 1 point, and each positive review may be 1 point), or may be different (e.g., each year of experience may be 1 point, and each positive review may be 2 points, and the like). In some examples, negative reviews subtract points.”). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Goel to Joshi, Chessman, and Kawamoto so that only users who are subject matter experts participate, guaranteeing the accuracy of the identification process.

With regard to claims 3 and 12: Joshi, Chessman, Kawamoto, and Goel disclose the computer-implemented method of claim 2, wherein the classifying further comprises determining, by the processor set, an area of expertise of the user based, at least in part, on a professional credential of the user (Goel, quoted above). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Goel to Joshi, Chessman, and Kawamoto so that only users who are subject matter experts participate, guaranteeing the accuracy of the identification process.

With regard to claim 19: Joshi, Chessman, Kawamoto, and Goel disclose the system of claim 18, wherein the program instructions are further executable to classify the user as a subject matter expert in a specific field and wherein the program instructions are further executable to determine an area of expertise of the user based, at least in part, on a professional credential of the user (Goel, quoted above).
It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Goel to Joshi, Chessman, and Kawamoto so that only users who are subject matter experts participate, guaranteeing the accuracy of the identification process.

Claims 4, 5, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Joshi in view of Chessman and Kawamoto, and further in view of Esna, Pub. No. US 2025/0091610 A1.

With regard to claims 4 and 13: Joshi, Chessman, and Kawamoto do not disclose the computer-implemented method of claim 1, further comprising determining, by the processor set, whether the user will be compensated for the user’s response to the injected label and the injected image. However, Esna discloses determining, by the processor set, whether the user will be compensated (paragraph 66: “At Block 430, an image of the object is shown to the user on a HMI display and the user is presented with a multiple choice question regarding the possible classifications of the object. The user is rewarded by selecting a choice of answer for the multiple choice question. Moving to Block 442, if the user rejected all of the multiple choices, then the Method 400 moves to Block 432 and continues therefrom.”) for the user’s response to the label and the image (paragraph 62: “Referring back to Block 432, if the user is determined to be credible, then the Method 400 moves to Block 436. At Block 436, an image of the object is shown to the user on a HMI display and the user is asked to annotate the object with a freeform answer. A freeform answer, also referred to as a freeform annotation, means the user is free to provide an annotation of the object without having to choose from a list of predetermined annotation choices. The user is rewarded once the freeform annotation is submitted to the server.”). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Esna to Joshi, Chessman, and Kawamoto so the user is compensated for his or her effort in identifying image labels, in order to attract more users to participate.

With regard to claims 5 and 14: Joshi, Chessman, Kawamoto, and Esna disclose the computer-implemented method of claim 4, further comprising compensating, by the processor set, the user (Esna, paragraph 66, quoted above) based on the user’s response to the label and the image (Esna, paragraph 62, quoted above). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Esna to Joshi, Chessman, and Kawamoto so the user is compensated for his or her effort in identifying image labels, in order to attract more users to participate.

Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Joshi in view of Chessman and Kawamoto, and further in view of Kelly, EP 3945459 A1.

With regard to claims 7 and 16: Joshi, Chessman, and Kawamoto do not disclose the computer-implemented method of claim 1, further comprising determining, by the processor set, the degree of statistical significance based, at least in part, on the determination whether the injected label accurately describes the injected image. However, Kelly discloses this aspect (“In operation 404, the method determines whether the candidate image contains accurate labels for the identified object of interest. For example, the processor 116 receives user input (e.g., clinician input) from a user interface of an application that is rendering the image that indicates whether the label of the object of interest is accurate. If not, the processor 116 may discard the candidate image by deleting the candidate image from memory, or take other action. For example, the processor 116 may take other action by requesting feedback from the user as to an accurate label for the object of interest in the candidate image. The feedback may be audio in nature (e.g., a clinician's voice) and/or tactile in nature (e.g., input on a keyboard or mouse). The processor 116 may then consider the candidate image with the corrected label as being an accurately labeled image for inclusion into a training data set (i.e., for operation 412). In addition, the processor 116 may use the feedback and/or send the feedback to neural network 124 to avoid mislabeling similar objects of interest in other images.”). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Kelly to Joshi, Chessman, and Kawamoto so the system can determine whether the user is providing accurate identification of image labels.

Claims 8, 9, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Joshi in view of Chessman and Kawamoto, and further in view of Campbell, CN 108496285 A.

With regard to claim 8: Joshi, Chessman, and Kawamoto do not disclose the computer-implemented method of claim 1, wherein the user’s response comprises brainwave data of the user. However, Campbell discloses the aspect wherein the user’s response comprises brainwave data of the user (“Here is described the increased user to enjoy the sound device and method, by a personalized audio signal so that the user perceived audio signal as if the user has perfect hearing and/or desired audio. In one embodiment, the earphone of the headset user comprises sensor and a loudspeaker. when the speaker playing audio signals to the user, sensor records the user response to the audio signal. the sensor may be a microphone, a brainwave sensor, electroencephalogram (EEG) sensor or the like. in response to user can be audio response in the ear of the user, associated with the user of the brain response, skin reactions associated with the user, etc. based on the corresponding measured, and based on how the perceptual knowledge of people sound, the audio signal by modifying the difference between hearing and/or desired hearing of hearing and the ideal compensation user, thereby increasing user enjoyment of the sound.”). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Campbell to Joshi, Chessman, and Kawamoto so brainwave data can be used to help determine the user’s identification of image labels; brainwave data can capture an additional emotional response and support a more accurate user determination.

With regard to claims 9 and 17: Joshi, Chessman, Kawamoto, and Campbell disclose the computer-implemented method of claim 8, wherein the brainwave data of the user is captured using one or more devices selected from a group consisting of an electroencephalogram cap, an electroencephalogram headset, earphones configured to measure the electrical activity of a brain of the user, and earmuffs configured to measure the electrical activity of the user’s brain (Campbell, quoted above). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Campbell to Joshi, Chessman, and Kawamoto so brainwave data can be used to help determine the user’s identification of image labels; brainwave data can capture an additional emotional response and support a more accurate user determination.

With regard to claim 20: Joshi, Chessman, Kawamoto, and Campbell disclose the aspect wherein the user’s response comprises brainwave data of the user and wherein the brainwave data of the user is captured using one or more devices selected from a group consisting of an electroencephalogram cap, an electroencephalogram headset, earphones configured to measure the electrical activity of a user’s brain, and earmuffs configured to measure the electrical activity of the user’s brain (Campbell, quoted above). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Campbell to Joshi, Chessman, and Kawamoto so brainwave data can be used to help determine the user’s identification of image labels; brainwave data can capture an additional emotional response and support a more accurate user determination.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DI XIAO, whose telephone number is (571) 270-1758. The examiner can normally be reached 9 AM-5 PM EST, M-F. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DI XIAO/
Primary Examiner, Art Unit 2178
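
For orientation, the limitations the rejection maps across Joshi, Chessman, and Kawamoto describe one pipeline: pick an injection time from the user's VR/AR activity, inject the image and label, capture a reflexive biometric response rather than an explicit click, score it with a model trained on prior user interactions, and persist the determination. Below is a minimal, hypothetical sketch of that pipeline as the office action characterizes it; every name, signature, and threshold is invented for illustration and none of it comes from the application or the cited references.

```python
# Hypothetical sketch of the claim 1 combination as the rejection reads it.
# All identifiers are invented; nothing here is the applicant's actual code.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Injection:
    image_id: str
    label: str
    injected_at: float  # time picked from the user's VR/AR browsing (Joshi)

def run_labelling_round(
    pick_time: Callable[[], float],                     # identify a time to inject
    inject: Callable[[Injection], None],                # render image + label in VR/AR
    capture_biometric: Callable[[], Dict[str, float]],  # reflexive response (Chessman)
    score: Callable[[Dict[str, float]], float],         # trained net, output in [0, 1] (Kawamoto)
    store: Callable[[Injection, bool], None],           # write the determination to memory
    image_id: str,
    label: str,
    difficulty_weight: float = 1.0,                     # Kawamoto's per-query weight
    threshold: float = 0.5,
) -> float:
    inj = Injection(image_id, label, pick_time())
    inject(inj)
    response = capture_biometric()        # e.g., heart rate, breath rate, EEG sample
    soft = score(response)                # "soft accuracy": 0.0 wrong .. 1.0 correct
    store(inj, soft >= threshold)         # "hard accuracy": thresholded to a 0/1 call
    return difficulty_weight * soft       # Kawamoto-style weighted score
```

The return value mirrors Kawamoto's weighted-score formulation: a per-query difficulty weight multiplied by the model's accuracy output, which is a thresholded 0/1 under hard accuracy or a real number in [0, 1] under soft accuracy.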

Prosecution Timeline

Nov 21, 2023 • Application Filed
Oct 16, 2025 • Non-Final Rejection — §103
Dec 15, 2025 • Applicant Interview (Telephonic)
Dec 16, 2025 • Response Filed
Dec 18, 2025 • Examiner Interview Summary
Feb 05, 2026 • Final Rejection — §103
Mar 10, 2026 • Interview Requested
Mar 31, 2026 • Applicant Interview (Telephonic)
Apr 02, 2026 • Examiner Interview Summary
Apr 09, 2026 • Response after Non-Final Action
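
Because the Feb 05, 2026 action is final, the reply windows recited in its conclusion map onto these dates with simple calendar arithmetic. A minimal sketch, assuming the timeline date above is the mailing date; python-dateutil (a third-party package) handles the month offsets:

```python
# Date arithmetic for the Feb 05, 2026 final rejection's reply windows, per
# the office action's conclusion. Illustrative only; verify against the
# official record.
from datetime import date

from dateutil.relativedelta import relativedelta  # pip install python-dateutil

mailed = date(2026, 2, 5)  # Final Rejection — §103 (timeline above)

# File within two months so that, if an advisory action mails late, any
# extension fee is calculated from the advisory action's date (37 CFR 1.136(a)).
two_month_date = mailed + relativedelta(months=2)     # 2026-04-05

# Shortened statutory period: reply due three months from mailing.
shortened_period = mailed + relativedelta(months=3)   # 2026-05-05

# Absolute statutory maximum with purchased extensions: six months.
statutory_maximum = mailed + relativedelta(months=6)  # 2026-08-05

print(two_month_date, shortened_period, statutory_maximum)
```

On this reading, the Apr 09, 2026 response fell inside the three-month shortened period but past the two-month date, so the advisory-action fee benefit described in the conclusion would not attach.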

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599341 • AUTONOMOUS, CONSENT DRIVEN AND GENERATIVE DEVICE, SYSTEM AND METHOD THAT PROMOTES USER PRIVACY, SELF-KNOWLEDGE AND WELL-BEING • Granted Apr 14, 2026 • 2y 5m to grant
Patent 12597519 • METHODS FOR CHARACTERIZING AND TREATING A CANCER TYPE USING CANCER IMAGES • Granted Apr 07, 2026 • 2y 5m to grant
Patent 12588967 • PRESENTATION OF PATIENT INFORMATION FOR CARDIAC SHUNTING PROCEDURES • Granted Mar 31, 2026 • 2y 5m to grant
Patent 12586456 • SYSTEMS AND METHODS FOR PROVIDING SECURITY SYSTEM INFORMATION USING AUGMENTED REALITY EFFECTS • Granted Mar 24, 2026 • 2y 5m to grant
Patent 12579773 • DISPLAY APPARATUS AND DISPLAY METHOD • Granted Mar 17, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
Grant Probability With Interview: 99% (+21.7%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate

Based on 600 resolved cases by this examiner. Grant probability derived from career allow rate.
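
The with-interview figure appears to be simple addition of the base rate and the interview lift, capped at 100%. A reconstruction of the displayed arithmetic, not the vendor's documented model:

```python
# Reconstructing the projection figures shown above (assumed formula).
base = 463 / 600                        # career allow rate: 77.2%, shown as 77%
lift = 0.217                            # interview lift: +21.7 points
with_interview = min(base + lift, 1.0)  # 98.9%, shown as 99%

print(f"{base:.0%} base, {with_interview:.0%} with interview")
```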
