Prosecution Insights
Last updated: April 19, 2026
Application No. 17/456,243

ASSISTED COLLABORATIVE NAVIGATION IN SCREEN SHARE ENVIRONMENTS

Status: Non-Final Office Action (§103), OA Round 8
Filed: Nov 23, 2021
Examiner: TAN, DAVID H (Art Unit 2145; Tech Center 2100, Computer Architecture & Software)
Assignee: International Business Machines Corporation
Grant Probability: 31% (At Risk); 46% with an interview
Expected OA Rounds: 8-9
Median Time to Grant: 4y 1m

Examiner Intelligence

Career Allow Rate: 31% (30 granted / 98 resolved; -24.4% vs Tech Center average). This examiner grants only 31% of cases.
Interview Lift: +15.8% among resolved cases with an interview (a strong lift).
Typical Timeline: 4y 1m average prosecution; 41 applications currently pending.
Career History: 139 total applications across all art units.
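These headline figures are internally consistent: the 46% with-interview probability minus the +15.8% lift implies roughly a 30% grant rate without an interview, in line with the 31% career allow rate. A minimal sketch of that arithmetic (assuming the dashboard's additive definition of lift; the variable names are ours, not the dashboard's):

```python
# Minimal sketch: the dashboard's headline rates as simple arithmetic
# over the examiner's career counts. Variable names are illustrative;
# the per-case dataset behind the report is not shown here.
granted, resolved = 30, 98
allow_rate = 100 * granted / resolved        # 30.6, displayed as 31%

with_interview = 46.0                        # grant probability with an interview, %
lift = 15.8                                  # interview lift, percentage points
without_interview = with_interview - lift    # 30.2%, close to the career rate

print(f"Career allow rate: {allow_rate:.1f}%")
print(f"Without interview: {without_interview:.1f}%, with interview: {with_interview:.1f}%")
```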

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 63.5% (+23.5% vs TC avg)
§102: 19.8% (-20.2% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)

Tech Center averages are estimates. Figures are based on career data from 98 resolved cases.
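A sanity check on the deltas: every one of them is consistent with a flat 40% baseline (for example, 8.5 - (-31.5) = 40.0, and the same holds for the other three statutes), suggesting the comparison uses a single estimated Tech Center average rather than per-statute baselines. A short sketch reproducing the table under that assumption, which is our inference rather than anything the dashboard states:

```python
# Hedged reconstruction of the "vs TC avg" deltas in the table above.
# Working backwards from the published numbers, each delta matches a
# flat Tech Center baseline of 40% per statute. That flat baseline is
# our assumption; the dashboard calls its baseline an "estimate".
rates = {"§101": 8.5, "§103": 63.5, "§102": 19.8, "§112": 6.7}
TC_AVG = 40.0  # inferred baseline, in percent

for statute, rate in rates.items():
    print(f"{statute}: {rate}% ({rate - TC_AVG:+.1f}% vs TC avg)")
```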

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Non-Final Rejection is filed in response to the Notice of Appeal filed 01/02/2026. Claims 1-20 remain pending.

Response to Arguments

Argument 1: Applicant argues in the Notice of Appeal filed 01/02/2026, pgs. 2-5, that Janakiraman does not disclose client-side processing.

Response to Argument 1: Applicant's arguments have been considered, and in light of the amendments a newly found combination of prior art (U.S. Patent Application Publication No. 2020/0313916 "Janakiraman", in light of U.S. Patent No. 10,445,051 "Subash", and further in light of U.S. Patent Application Publication No. 2015/0142447 "Kennewick") is applied in the updated rejections. However, the examiner notes that Janakiraman at least suggests that a user device may be an example computer system, and that an example computer system may implement any one or more of the methods, tools, and modules, and any related functions, described herein. Thus it is possible for a user device to implement the above-described NLP module on a local device. It is noted that Janakiraman lacks an explicit recitation of why it would be advantageous to have a local NLP module. This is supported by the following paragraphs of Janakiraman: [0026], "The user devices 125 may be any type of computer system and may be substantially similar to computer system 1101 of FIG. 5"; [0028], "In embodiments, the virtual conference assistant 102 may be a standalone device or located on another device, such as database 130"; [0059], "Referring now to FIG. 5, shown is a high-level block diagram of an example computer system 1101 that may be used in implementing one or more of the methods, tools, and modules, and any related functions, described herein"; and [0093], "Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network".

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1, 3, 5, 8-9, 11, 13, 16-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2020/0313916 "Janakiraman", in light of U.S. Patent No. 10,445,051 "Subash", and further in light of U.S. Patent Application Publication No. 2015/0142447 "Kennewick".

Claim 1: Janakiraman teaches a computer-implemented method, comprising (i.e. para. [0017, 0026], "The system may determine that the command given has instructed the users to utilize a UI element (e.g., mute icon, screen share icon, chat box, etc.)… the user devices 125 are operated by one or more users within the virtual conferencing session 120. The user devices 125 may be any type of device (e.g., computer, smartphone, tablet, etc.) configured to communicatively connect to the virtual conferencing session", wherein it is noted that any user connected to a conferencing session may be a first user with a screen with a plurality of UI elements); performing, by the computing device (i.e. para. [0026, 0059], "The user devices 125 may be any type of computer system and may be substantially similar to computer system 1101 of FIG. 5… Referring now to FIG. 5, shown is a high-level block diagram of an example computer system 1101 that may be used in implementing one or more of the methods, tools, and modules, and any related functions", wherein it is noted that the NLP module may be a module located on a first user's computing system), natural language processing of a feedback received from the second user to the computing device of the first user (i.e. para. [0029], "the NLP module 106 may detect the words 'mute your line' when spoken during the virtual conferencing session 120. The NLP module 106 may correlate the word 'mute' with a UI action to use the mute button retrieved from a UI element dictionary 110", wherein feedback from a second user 125B may be translated to a first user 125A); in response to performing the natural language processing of the feedback received from the second user, identifying, by the computing device (i.e. para. [0031], "The UI element dictionary may include various user guidance to provide users assistance on interacting with a respective UI element. In embodiments, the guidance may include various text, descriptions, images, video, and/or audio media depending on the type of challenges a user may be experiencing. For example, mute icon guidance may include various videos on how to find the mute icon. The system may retrieve this video when the UI action mute is detected from a phrase uttered by a user (e.g., please mute your line) and another user is determined to be experiencing difficulty in finding the UI element"); determining, by the computing device (i.e. para. [0044-0045], "If the UI action data threshold has not been met, 'No' at step 225, the process 200 will return to monitoring the virtual conferencing session at step 205. For example, if an average level user typically takes less than 5 mouse clicks to access the mute button, the UI action data threshold may be set to 7 mouse clicks. In this way, the UI action data threshold would not be met for average level users and the system will continue to monitor the virtual conferencing session for other commands…. If the data threshold has been met, 'Yes' at step 225, the process 200 will continue by determining that the one or more users are experiencing difficulty in locating the UI element. This is illustrated at step 230. For example, if a beginner level user has exceeded the data threshold of 7 mouse clicks, the system will determine that the user is experiencing difficulty"); and translating, by the computing device, a natural language instruction extracted using the natural language processing of the feedback from the second user into a UI guidance displayed on the first user screen, wherein the UI guidance is configured to assist the first user to perform the computing input action (i.e. para. [0048], "The system may provide various data (e.g., text, images, video, audio) and/or guidance to aid the user experiencing difficulty in finding the appropriate UI element. In embodiments, the system may provide captured screenshots, audio, or video snippets relevant to the UI action… the UI element displayed on the user's screen may change to help guide the user to the UI element. Example changes to the UI element include, but are not limited to, the UI element getting larger, changing color, blinking, and/or including an arrow or other indicator that moves from the location of the user's cursor to the UI element").

While Janakiraman teaches a framework in which a first user receives an instruction from a second user, decoded by an NLP module, which references a UI element and executes UI guidance translated into displayed assistance to the first user, Janakiraman may not explicitly teach: sharing, by a computing device of a first user, a first user screen of the computing device with a second user; performing, by the computing device of the first user, natural language processing; identifying, by the computing device of a first user, a natural language instruction; determining, by the computing device of the first user; and translating, by the computing device of the first user, an intent of the identified natural language instruction.

However, Subash teaches sharing, by a computing device of a first user, a first user screen of the computing device with a second user (i.e. Col. 2, lines 15-25, "When the user launches a customer support application on the tablet computing device 100 and initiates a screen sharing function, data representing the user interface content 110 currently being displayed on the tablet computing device 100 is transmitted to the support agent computing device 150"). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add sharing, by a computing device of a first user, a first user screen of the computing device with a second user, to Janakiraman's UI guidance, wherein a first user may share their screen with a second user in order to receive guidance, as taught by Subash. One would have been motivated to combine Subash with Janakiraman, and would have had a reasonable expectation of success, in order to create a more effective way to communicate and solve a problem being experienced.
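The "determining" limitation above turns on Janakiraman's click-count threshold ([0044]-[0045]). For orientation, here is a minimal hypothetical sketch of that heuristic; Janakiraman discloses no code, and every name below is ours:

```python
# Hypothetical sketch of the "UI action data threshold" heuristic quoted
# from Janakiraman [0044]-[0045]: after a spoken command maps to a UI
# element, a user who exceeds the click threshold without reaching the
# element is flagged as experiencing difficulty.
CLICK_THRESHOLD = 7  # e.g., average users need <5 clicks, so the bar is set at 7

def experiencing_difficulty(clicks_since_command: int,
                            threshold: int = CLICK_THRESHOLD) -> bool:
    """Step 225: 'Yes' (difficulty, step 230) when the threshold is exceeded,
    'No' (keep monitoring, step 205) otherwise."""
    return clicks_since_command > threshold

assert not experiencing_difficulty(4)   # average user: keep monitoring
assert experiencing_difficulty(8)       # beginner past the bar: trigger guidance
```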
While Janakiraman-Subash teach a computing device capable of performing NLP, identifying NLP instructions, determining that the instruction is not being followed, and subsequently translating the intent of the NLP into instructions to a first user, and strongly imply that a computing device of a first user is capable of executing such NLP module functions, Janakiraman-Subash may not explicitly teach: performing, by the computing device of the first user, natural language processing; identifying, by the computing device of a first user, a natural language instruction; determining, by the computing device of the first user; and translating, by the computing device of the first user, an intent of the identified natural language instruction.

However, Kennewick teaches performing, by the computing device of the first user, natural language processing (i.e. para. [0038], "each device 210 may have various services or applications associated therewith, and may perform various aspects of natural language processing locally"; para. [0030], "the intent determination engine 130a may determine whether to process a given input locally (e.g., when the device 100 has intent determination capabilities that suggest favorable conditions for recognition)"); identifying, by the computing device of a first user, a natural language instruction (i.e. para. [0048], "an intent of the multi-modal natural language input may be determined at the input device using local natural language processing capabilities and resources"); determining, by the computing device of the first user (i.e. para. [0048], "As such, the input device may attempt to determine a best guess as to an intent of the user that provided the input, such as identifying a conversation type (e.g., query, didactic, or exploratory) or request that may be contained in the input (e.g., a command or query relating to one or more domain agents or application domains)"); and translating, by the computing device of the first user, an intent of the identified natural language instruction (i.e. para. [0049], "When the intent determination meets the acceptable level confidence, processing may proceed directly to an operation 380 where action may be taken in response thereto. For example, when the intent determination indicates that the user has requested certain information, one or more queries may be formulated to retrieve the information"). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add performing, by the computing device of the first user, natural language processing; identifying, by the computing device of a first user, a natural language instruction; determining, by the computing device of the first user; and translating, by the computing device of the first user, an intent of the identified natural language instruction, to Janakiraman-Subash's UI guidance, with how an NLP module may process, identify, and execute user intent on a local device using local resources, as taught by Kennewick. One would have been motivated to combine the decentralized NLP modules of Kennewick with the intent monitoring and inaction thresholds of Janakiraman-Subash, and would have had a reasonable expectation of success, in order to determine the intent of a natural language input without assistance from the central device or the secondary devices; communications and processing resources may be conserved by taking immediate action as may be appropriate [Kennewick, para. 0050].

Claim 3: Janakiraman, Subash, and Kennewick teach the method of claim 1. Janakiraman further teaches wherein the performing the natural language processing of the feedback received from the second user to the computing device of the first user further comprises: capturing an audio input stream from the second user (i.e. para. [0040], "a user in the virtual conferencing session may say the phrase 'please go on mute' to the other participants in the conference"); transcribing the captured audio input stream to a text data (i.e. para. [0040], "The system will detect the keyword 'mute' through natural language processing"); and classifying the text data as including an action criteria (i.e. para. [0040], "and determine that the word corresponds to muting audio using a mute button icon (e.g., UI element)").

Claim 5: Janakiraman, Subash, and Kennewick teach the method of claim 1. Janakiraman further teaches determining that the at least one UI element instructed for manipulation in the identified natural language instructions is not being manipulated to perform the computing input action using the computing device: retrieving a triggering threshold associated with the first user; and determining that the computing action is not performed within a timeframe indicated by the retrieved triggering threshold (i.e. para. [0019], "Once the system determines one or more users have failed to locate a respective UI element within the required threshold, the system will become an active participant on the respective user's UI and provide assistance to the one or more users experiencing difficulty").

Claim 8: Janakiraman, Subash, and Kennewick teach the method of claim 1. Janakiraman further teaches wherein the UI guidance displayed on the first user screen further comprises: graphically annotating a portion of the first user screen to assist the first user to move a cursor to the graphically annotated portion, wherein the graphically annotated portion is associated with an area of the first user screen indicated by the identified natural language instruction (i.e. para. [0048], Fig. 3, "Example changes to the UI element include, but are not limited to, the UI element getting larger, changing color, blinking, and/or including an arrow or other indicator that moves from the location of the user's cursor to the UI element").

Claims 9, 11, 13, and 16 are the system claims reciting similar limitations to claims 1, 3, 5, and 8, respectively, and are rejected for similar reasons. Claims 17 and 19 are the computer program product claims reciting similar limitations to claims 1 and 3, respectively, and are rejected for similar reasons.

Claims 2, 10, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Janakiraman, in light of Subash, further in light of Kennewick, and further in light of U.S. Patent Application Publication No. 2018/0213358 "Shen".

Claim 2: Janakiraman, Subash, and Kennewick teach the method of claim 1 but may not explicitly teach receiving an approval from the first user indicating the second user as an authorized speaker, and processing the feedback transmitted from the second user based on the received approval. However, Shen teaches receiving an approval from the first user indicating the second user as an authorized speaker (i.e. para. [0091], "the first terminal initiates the help request to the second terminal… the user of the second terminal selects the acceptance button", wherein the BRI for approval encompasses the first user initiating a connection with a second user, who is noted in para. [0011] as capable of providing audio feedback); and processing the feedback transmitted from the second user based on the received approval (i.e. para. [0011], "Optionally, the prompt information includes at least one piece of the following information: … audio information used to indicate the position of the target point or indicate the specific path to the target point", wherein the second user may speak audio information as feedback to the first user's request for help). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add receiving an approval from the first user indicating the second user as an authorized speaker, and processing the feedback transmitted from the second user based on the received approval, to Janakiraman-Subash-Kennewick's UI guidance and screen sharing, as taught by Shen. One would have been motivated to combine Shen with Janakiraman, and would have had a reasonable expectation of success, in order to give more accurate prompt information for helping the help seeker.

Claims 10 and 18 are the system claim and the computer program product claim, respectively, reciting similar limitations to claim 2, and are rejected for similar reasons.

Claims 4, 6-7, 12, 14-15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Janakiraman, in light of Subash, further in light of Kennewick, and further in light of U.S. Patent Application Publication No. 2006/0197746 "Nirhamo".

Claim 4: Janakiraman, Subash, and Kennewick teach the method of claim 1. Janakiraman further teaches tracking at least one user action toward the UI element (i.e. para. [0045], "If the data threshold has been met, 'Yes' at step 225, the process 200 will continue by determining that the one or more users are experiencing difficulty in locating the UI element. This is illustrated at step 230. For example, if a beginner level user has exceeded the data threshold of 7 mouse clicks, the system will determine that the user is experiencing difficulty"). Janakiraman, Subash, and Kennewick may not explicitly teach tracking at least one cursor movement of the computing device associated with the first user. However, Nirhamo teaches tracking at least one cursor movement of the computing device associated with the first user (i.e. para. [0030], "As FIG. 3B illustrates, when compared to FIG. 3A, the cursor 302 has been moved to indicate the 'language options' selection component. The navigation guidance has been rearranged accordingly", wherein once it is determined that the identified action of moving a cursor to a desired UI element is completed, the navigation guidance rearranges in preparation for another user action). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add tracking at least one cursor movement of the computing device associated with the first user, to Janakiraman-Subash-Kennewick's screen sharing and assistance methods for tracking the completion of a task identified in an NLP instruction, in order to determine whether the identified action is being performed on the computing device, as taught by Nirhamo. One would have been motivated to combine Nirhamo with Janakiraman-Subash-Kennewick, and would have had a reasonable expectation of success, in order to further assist a user in the selection of a particular element.

Claim 6: Janakiraman, Subash, and Kennewick teach the method of claim 1. While Janakiraman teaches the UI guidance displayed on the first user screen, including an arrow or other indicator (i.e. para. [0048], "Example changes to the UI element include, but are not limited to, the UI element getting larger, changing color, blinking, and/or including an arrow or other indicator"), Janakiraman may not explicitly teach wherein the UI guidance further comprises: displaying a quad arrow, wherein each directional arrow of the displayed quad arrow is associated with a respective direction on the first user screen; and highlighting the directional arrow associated with guiding the first user to move a cursor from a current cursor position towards the respective direction of the highlighted directional arrow. However, Nirhamo teaches displaying a quad arrow, wherein each directional arrow of the displayed quad arrow is associated with a respective direction on the first user screen (i.e. para. [0031], FIGS. 3A and 3B, "the navigation guidance displays the paths from the selection component the cursor 302 currently indicates to the selection components directly connected to the selection component the cursor 302 currently indicates", wherein the BRI for a quad arrow encompasses how the navigation guidance toward the Setup, Trailers, Language options, and Behind the scenes components gives four respective directions a user can move their cursor); and highlighting the directional arrow associated with guiding the first user to move a cursor from a current cursor position towards the respective direction of the highlighted directional arrow (i.e. para. [0031], "The components directly connected to the selection component a cursor 402 currently indicates, as well as corresponding paths, may be highlighted to indicate better where the cursor 402 may be moved with a single activation of a direction key of the input device"). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add this quad-arrow display and directional-arrow highlighting to Janakiraman-Subash-Kennewick's screen sharing and assistance methods, as taught by Nirhamo. One would have been motivated to combine Nirhamo with Janakiraman-Subash-Kennewick, and would have had a reasonable expectation of success, in order to further assist a user in the selection of a particular element.

Claim 7: Janakiraman, Subash, and Kennewick teach the method of claim 1. While Janakiraman teaches the UI guidance displayed on the first user screen with regard to an intent of the identified natural language instruction (i.e. para. [0022], "The system may analyze these commands and correlate the commands with UI actions to update user guidance for providing assistance to inexperienced users"), Janakiraman may not explicitly teach wherein the UI guidance further comprises: automatically repositioning a cursor on a first user screen from a current position to a specific UI element on the first user screen based on the intent of the identified natural language instruction. However, Nirhamo teaches wherein displaying the UI guidance further comprises automatically repositioning a cursor on a first user screen from a current position to a specific UI element on the first user screen based on the identified intent (i.e. para. [0030], "As FIG. 3B illustrates, when compared to FIG. 3A, the cursor 302 has been moved to indicate the 'language options' selection component", wherein the cursor is automatically repositioned from a current Soundtrack position to a specific Language Options element based on the instruction of using a left direction key). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add automatically repositioning a cursor on a first user screen from a current position to a specific UI element on the first user screen based on the intent of the identified instruction, to Janakiraman-Subash-Kennewick's screen sharing and assistance methods, as taught by Nirhamo. One would have been motivated to combine Nirhamo with Janakiraman-Subash-Kennewick, and would have had a reasonable expectation of success, in order to further assist a user in the selection of a particular element.
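The grounds above repeatedly cite the same disclosed mechanism: Janakiraman's NLP module correlating a detected keyword with a UI action through a "UI element dictionary" and retrieving stored guidance media ([0029], [0031], [0040]). A minimal hypothetical sketch of that lookup flow follows; the reference publishes no code, and all names and entries below are illustrative:

```python
# Hypothetical sketch of the keyword-to-UI-element dictionary flow the
# rejections cite from Janakiraman: detect a keyword in a spoken phrase,
# correlate it with a UI action via a "UI element dictionary", and
# retrieve the stored guidance media for the struggling user.
UI_ELEMENT_DICTIONARY = {
    "mute": {"element": "mute_button",
             "guidance": "video: how to find the mute icon"},
    "share": {"element": "screen_share_icon",
              "guidance": "text: the Share icon is in the toolbar"},
}

def guidance_for_phrase(phrase: str) -> list[str]:
    """Return guidance for every dictionary keyword detected in the phrase."""
    words = phrase.lower().split()
    return [entry["guidance"]
            for keyword, entry in UI_ELEMENT_DICTIONARY.items()
            if keyword in words]

# "please mute your line" -> mute-button guidance, mirroring para. [0040]
print(guidance_for_phrase("please mute your line"))
```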
Claims 12, 14, and 15 are the system claims reciting similar limitations to claims 4, 6, and 7, respectively, and Claim 20 is the computer program product claim reciting similar limitations to claim 4; each is rejected for similar reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Patent Application Publication No. 2014/0136195 "Abdossalami" teaches in para. [0025] that speech-to-text conversion may alternatively be performed locally at the computing device 100, provided the device has sufficient processing power to perform the conversion.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TAN, whose telephone number is (571) 272-7433. The examiner can normally be reached M-F, 7:30-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.T./ Examiner, Art Unit 2145
/CESAR B PAULA/ Supervisory Patent Examiner, Art Unit 2145

Prosecution Timeline

Nov 23, 2021
Application Filed
Jul 05, 2022
Non-Final Rejection — §103
Oct 12, 2022
Response Filed
Jan 18, 2023
Final Rejection — §103
Mar 23, 2023
Response after Non-Final Action
Apr 12, 2023
Response after Non-Final Action
Apr 24, 2023
Request for Continued Examination
Apr 27, 2023
Response after Non-Final Action
Aug 31, 2023
Non-Final Rejection — §103
Dec 13, 2023
Response Filed
Mar 21, 2024
Final Rejection — §103
May 22, 2024
Examiner Interview Summary
May 22, 2024
Applicant Interview (Telephonic)
May 24, 2024
Response after Non-Final Action
Jun 25, 2024
Examiner Interview (Telephonic)
Jun 25, 2024
Response after Non-Final Action
Jul 01, 2024
Request for Continued Examination
Jul 07, 2024
Response after Non-Final Action
Oct 28, 2024
Non-Final Rejection — §103
Feb 03, 2025
Examiner Interview Summary
Feb 03, 2025
Applicant Interview (Telephonic)
Feb 04, 2025
Response Filed
Mar 17, 2025
Final Rejection — §103
May 20, 2025
Response after Non-Final Action
Jun 20, 2025
Request for Continued Examination
Jun 24, 2025
Response after Non-Final Action
Sep 29, 2025
Non-Final Rejection — §103
Dec 22, 2025
Applicant Interview (Telephonic)
Dec 22, 2025
Examiner Interview Summary
Jan 02, 2026
Notice of Appeal
Jan 02, 2026
Response after Non-Final Action
Feb 17, 2026
Response after Non-Final Action
Feb 19, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12443336: INTERACTIVE USER INTERFACE FOR DYNAMICALLY UPDATING DATA AND DATA ANALYSIS AND QUERY PROCESSING (2y 5m to grant; granted Oct 14, 2025)
Patent 12282863: METHOD AND SYSTEM OF USER IDENTIFICATION BY A SEQUENCE OF OPENED USER INTERFACE WINDOWS (2y 5m to grant; granted Apr 22, 2025)
Patent 12182378: METHODS AND SYSTEMS FOR OBJECT SELECTION (2y 5m to grant; granted Dec 31, 2024)
Patent 12111956: Methods and Systems for Access Controlled Spaces for Data Analytics and Visualization (2y 5m to grant; granted Oct 08, 2024)
Patent 12032809: Computer System and Method for Creating, Assigning, and Interacting with Action Items Related to a Collaborative Task (2y 5m to grant; granted Jul 09, 2024)
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 8-9
Grant Probability: 31%; 46% with an interview (+15.8%)
Median Time to Grant: 4y 1m
PTA Risk: High

Based on 98 resolved cases by this examiner. Grant probability is derived from the career allow rate.

Free tier: 3 strategy analyses per month