Prosecution Insights
Last updated: April 19, 2026
Application No. 18/635,150

INFORMATION PROCESSING APPARATUS, CONTROL METHOD FOR INFORMATION PROCESSING APPARATUS, AND STORAGE MEDIUM

Non-Final OA — §103, §112, §DP
Filed: Apr 15, 2024
Examiner: VU, THANH T
Art Unit: 2179
Tech Center: 2100 — Computer Architecture & Software
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 74% — above average (464 granted / 623 resolved; +19.5% vs TC avg)
Interview Lift: +16.5% across resolved cases with interview — strong
Typical Timeline: 3y 6m average prosecution; 19 applications currently pending
Career History: 642 total applications across all art units
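The headline allow rate follows directly from the career totals above. A minimal sketch in Python (the variable names are ours, not the tool's, and the rounding rule is an assumption about how the dashboard displays 74%):

```python
# Career allow rate from the examiner's resolved-case totals above.
granted = 464
resolved = 623

allow_rate = granted / resolved   # ≈ 0.745
print(f"{allow_rate:.1%}")        # → 74.5%, displayed on the card as 74%
```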

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 17.6% (-22.4% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 623 resolved cases
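A quick consistency check on the statute-specific figures above: if each delta is simply the examiner's rate minus the Tech Center average (our reading of the chart, not something the page documents), all four statutes imply the same TC baseline.

```python
# Examiner allow rate (%) and stated delta vs the TC average, per statute.
by_statute = {
    "101": (7.2, -32.8),
    "103": (47.1, +7.1),
    "102": (17.6, -22.4),
    "112": (16.1, -23.9),
}

# Implied TC average = examiner rate - delta; every statute comes out to 40.0%.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in by_statute.items()}
print(implied_tc_avg)  # → {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

The identical 40.0% baseline across statutes is consistent with the page's note that the Tech Center average is a single estimate rather than a per-statute measurement.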

Office Action

DETAILED ACTION

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection.
A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 16-23 and 25-28 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1 and 5-13 of U.S. Patent No. 11,983,385. Although the conflicting claims are not identical, they are not patentably distinct from each other because claims 1 and 5-13 of U.S. Patent No. 11,983,385 contain every element of claims 16-23 and 25-28 of the instant application and thus anticipate the claims of the instant application. The claims of the instant application therefore are not patentably distinct from the earlier patent claims and as such are unpatentable over obviousness-type double patenting. A later patent/application claim is not patentably distinct from an earlier claim if the later claim is anticipated by the earlier claim.
Claim 24 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,983,385, Matsumoto et al. (“Matsumoto”, Pub. No. US 2012/0299862), and Hirai (Pub. No. US 2014/0168130). Although the conflicting claims are not identical, claim 1 of U.S. Patent No. 11,983,385 contains every element of claim 24 of the instant application except for the obvious variation of “wherein, in a case where the information processing apparatus is started, and in a case where the information processing apparatus is set to operate in the touch mode, the information processing apparatus is caused to operate in the touch mode, wherein in a case where the information processing apparatus is started and, in a case where the information processing apparatus is set to operate in the touchless mode, the information processing apparatus is caused to operate in the touchless mode; and wherein in a case where the information processing apparatus is started and, in a case where the information processing apparatus is not set to operate in either the touch mode or the touchless mode, a user operation performed on the information processing apparatus for selecting either the touch mode or the touchless mode is received.” However, these obvious variations are taught by Matsumoto and Hirai.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add the teachings of Matsumoto and Hirai to enable reception of both touch and touchless operation, because doing so would provide an intuitive and intelligible touchless operation while ensuring the intelligibility of a touch display operation, thereby reducing the number of operation steps and the operation time.

Claim Objections

Claim 16 is objected to because of the following informalities: “one or more processors configured to execute the instructions stored in the one or more memories to;” should be “one or more processors configured to execute the instructions stored in the one or more memories to:” Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 17-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 17, the Examiner notes that the language recited in this limitation, specifically the word "if," is interpreted as conditional/optional claim language.
Claim scope is not limited by claim language that suggests or makes steps optional but does not require those steps to be performed, or by claim language that does not limit the claim to a particular structure. Therefore, the language following the "if" is optional and is not given any patentable weight.

Claim 17 recites “a first that detects.” It is unclear what “a first” refers to. It appears the limitation should be “a first sensor.” Claim 18 recites the limitation "the contents". There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 16, 19, 21-23, 25, and 27-28 are rejected under 35 U.S.C. 103 as being unpatentable over Matsumoto et al. (“Matsumoto”, Pub. No. US 2012/0299862) and Takatoh (Pub. No. US 2021/0112168).

Per claim 16, Matsumoto teaches an information processing apparatus capable of receiving a touch operation and a touchless operation comprising: one or more memories that store instructions; and one or more processors configured to execute the instructions stored in the one or more memories to: receive a user operation performed on the information processing apparatus for selecting either a touch mode in which a touch operation is enabled and a touchless operation is disabled, (Fig.
1, switch button 131, [0010] which show a device can operate either in a first input mode using touchless input, or in a second input mode using touch input. The user can switch from the first to the second mode using a switch button. Fig. 11 and [0109]-[0130], which show the user can switch to the second input mode, i.e. touch mode, by pressing switch button 131. In the second mode only touch gestures are recognized (i.e. touch operation is enabled and touchless operation is not selected (disabled)) (see, fig. 11, S5-S6)) or a touchless mode in which a touch operation is disabled and a touchless operation is enabled (Fig. 1, switch button 131, [0010], which show a device can operate either in a first input mode using touchless input, or in a second input mode using touch input. The user can switch from the first to the second mode using a switch button. fig. 2 and [0044] which show in the first, i.e. touchless, input mode, hand gestures are recognized. Fig. 11 and [0109]-[0130] which show the device starts by default in the first input mode, i.e. touchless mode, in which only hand gestures are recognized (i.e. touchless operation is enabled and touch operation is not selected (disabled)) (Fig. 11, S7-S11)); operate the information processing apparatus in the touch mode in which the touch operation, which is a user operation performed on the information processing apparatus, is enabled based on a user operation performed on the information processing apparatus being the switch operation (Fig. 11 and [0109]-[0130], The user can switch to the second input mode, i.e. touch mode, by pressing switch button 131. In the second mode only touch gestures are recognized (fig. 
11, S5-S6); and operate the information processing apparatus in the touchless mode in which the touchless operation, which is a user operation performed on the information processing apparatus, is enabled based on a user operation performed on the information processing apparatus being the touchless operation (fig. 2 and [0044] which show in the first, i.e. touchless, input mode, hand gestures are recognized. Fig. 11 and [0109]-[0130] which show the device starts by default in the first input mode, i.e. touchless mode, in which only hand gestures are recognized (Fig. 11, S7-S11). In addition, while the user makes only hand gestures, the device remains in touchless mode.)

Matsumoto does not specifically teach that the touch operation is enabled based on a user operation performed on the information processing apparatus being the touch operation. However, Takatoh teaches that the touch operation is enabled based on a user operation performed on the information processing apparatus being the touch operation (a multi-function printer has a touch screen displaying software keys, messages, and the like for receiving various settings, printing instructions, and the like from a user (FIG. 1 and [0025], [0033]-[0034], [0058] and [0066]) and also accepts touchless gestures (FIG. 4-5 and [0052]). The user can operate the touch panel initially (FIG. 1 and [0049]-[0050]). Detection of gesture (gesture recognition) is started when a start condition is satisfied, and is terminated when a termination condition is satisfied. Before starting detection of gesture and after terminating detection of gesture (during suspension of gesture recognition), the gesture is not detected (FIG. 7 and [0060]-[0061], [0091]-[0092])).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add the teachings of Takatoh to operate the apparatus in a touch mode based on a first user operation being a touch operation, because doing so would improve operability.

Per claim 19, the modified Matsumoto teaches the information processing apparatus according to claim 16, but does not teach wherein the information processing apparatus is a printer. However, Takatoh further teaches wherein the information processing apparatus is a printer (a multi-function printer has a touch screen (FIG. 1 and [0025], [0033]-[0034], [0058], and [0066]) and also accepts touchless gestures (FIG. 4-5 and [0052])). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add the teachings of Takatoh to use a printer, because doing so would improve operability.

Per claim 21, the modified Matsumoto teaches the information processing apparatus according to claim 16, wherein the touchless operation is a hand gesture operation (Matsumoto, in the first, i.e. touchless, input mode, hand gestures are recognized (FIG. 2 and [0044])).

Per claim 22, the modified Matsumoto teaches the information processing apparatus according to claim 16, wherein the touch operation is an operation that causes an indicator to touch an operation panel of the information processing apparatus (Matsumoto, the second input mode is for receiving, as an input, a touch operation made on the touch panel unit 110 (FIG. 1 and [0037]). To touch a display position of each of the buttons being displayed on the touch panel unit 110 with the user's finger or the like is represented as "to select a button" ([0053])).
Per claim 23, the modified Matsumoto teaches the information processing apparatus according to claim 16, wherein the touchless operation is an operation for operating the information processing apparatus without an indicator touching an operation panel of the information processing apparatus (Matsumoto, the first input mode is a mode for receiving, as an input, a touchless operation detected based on images photographed by the camera 121 ([0037])).

Per claim 25, the modified Matsumoto teaches the information processing apparatus according to claim 16, wherein a user operation performed on the information processing apparatus is an operation performed on a content screen for selecting either the touch mode or the touchless mode, the content screen being displayed on a display unit of the information processing apparatus (Matsumoto, a display recipe screen G1a having one or more contents on the display (Fig. 1 and [0041]-[0042]). Fig. 1, switch button 131, and [0010] show a device can operate either in a first input mode using touchless input, or in a second input mode using touch input. The user can switch from the first to the second mode using a switch button. Fig. 11 and [0109]-[0130]: the user can switch to the second input mode, i.e. touch mode, by pressing switch button 131. In the second mode only touch gestures are recognized (see fig. 11, S5-S6). Fig. 2 and [0044] show in the first, i.e. touchless, input mode, hand gestures are recognized. Fig. 11 and [0109]-[0130] show the device starts by default in the first input mode, i.e. touchless mode, in which only hand gestures are recognized (Fig. 11, S7-S9). Takatoh, a multi-function printer has a touch screen displaying software keys, messages, and the like for receiving various settings, printing instructions, and the like from a user (FIG. 1 and [0025], [0033]-[0034], [0058] and [0066]) and also accepts touchless gestures (FIG. 4-5 and [0052]). The user can operate the touch panel initially (FIG.
1 and [0049]-[0050]). Detection of gesture (gesture recognition) is started when a start condition is satisfied, and is terminated when a termination condition is satisfied. Before starting detection of gesture and after terminating detection of gesture (during suspension of gesture recognition), the gesture is not detected (FIG. 7 and [0060]-[0061], [0091]-[0092])).

Claims 27 and 28 are each rejected under the same rationale as claim 16.

Claims 17, 20, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Matsumoto et al. (“Matsumoto”, Pub. No. US 2012/0299862), Takatoh (Pub. No. US 2021/0112168), and Hirai (Pub. No. US 2014/0168130).

Per claim 17, the modified Matsumoto teaches the information processing apparatus according to claim 16, and further teaches a first sensor that detects the touch operation and a second sensor that detects the touchless operation (Matsumoto, [0037]…the first input mode is a mode for receiving, as an input, a touchless operation (described in detail later) detected based on images photographed by the camera 121 (i.e. a sensor). The second input mode is for receiving, as an input, a touch operation made on the touch panel unit 110 (i.e. a sensor)). The modified Matsumoto does not specifically teach that, if there has been no operation from the user for a predetermined time, both a first sensor that detects the touch operation and a second sensor that detects the touchless operation are enabled. Takatoh further teaches determining if there has been no operation from the user for a predetermined time (FIG. 11 and [0124]-[0129]). Hirai teaches wherein both the first and second sensors are enabled (i.e. the type of touch input indicates whether the desired input operation is touch input or voice command (FIG. 2 and [0075]-[0076]). The user can select a touch or touchless, i.e. voice, mode at any time by performing the appropriate input gesture to select either mode on any button.
By enabling the user to use two types of operations properly for each button in this way, the input method determining unit can determine whether the user is trying to perform an input by performing either a touch operation or a voice operation on each button (FIG. 5 and [0074])). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add the teachings of Hirai to enable both sensors if there has been no operation from the user for a predetermined time, because doing so would provide an intuitive and intelligible touchless operation while ensuring the intelligibility of a touch display operation, thereby reducing the number of operation steps and the operation time.

Per claim 20, the modified Matsumoto teaches the information processing apparatus according to claim 16, but does not specifically teach wherein reception of both the touch operation and the touchless operation is enabled until the operation of the user is received. However, Hirai teaches wherein reception of both the touch operation and the touchless operation is enabled until the operation of the user is received (i.e. the type of touch input indicates whether the desired input operation is a touch input or voice command (FIG. 2 and [0075]-[0076]). The user can select a touch or touchless, i.e. voice, mode at any time by performing the appropriate input gesture to select either mode on any button. By enabling the user to use two types of operations properly for each button in this way, the input method determining unit can determine whether the user is trying to perform an input by performing either a touch operation or a voice operation on each button (FIG. 5 and [0074])).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add the teachings of Hirai to enable reception of both touch and touchless operation, because doing so would provide an intuitive and intelligible touchless operation while ensuring the intelligibility of a touch display operation, thereby reducing the number of operation steps and the operation time.

Per claim 24, the modified Matsumoto teaches the information processing apparatus according to claim 16, wherein, in a case where the information processing apparatus is started, and in a case where the information processing apparatus is set to operate in the touch mode, the information processing apparatus is caused to operate in the touch mode (Matsumoto, Fig. 11 and [0109]-[0130], the user can switch to the second input mode, i.e. touch mode, by pressing switch button 131 (i.e. starting the touch mode). In the second mode only touch gestures are recognized (fig. 11, S5-S6)), wherein in a case where the information processing apparatus is started and, in a case where the information processing apparatus is set to operate in the touchless mode, the information processing apparatus is caused to operate in the touchless mode; and wherein in a case where the information processing apparatus is started (Matsumoto, fig. 2 and [0044] which show in the first, i.e. touchless, input mode, hand gestures are recognized. Fig. 11 and [0109]-[0130] which show the device starts by default in the first input mode, i.e. touchless mode, in which only hand gestures are recognized (Fig. 11, S7-S9)). The modified Matsumoto does not specifically teach in a case where the information processing apparatus is not set to operate in either the touch mode or the touchless mode, a user operation performed on the information processing apparatus for selecting either the touch mode or the touchless mode is received.
However, Hirai teaches where the information processing apparatus is not set to operate in either the touch mode or the touchless mode, a user operation performed on the information processing apparatus for selecting either the touch mode or the touchless mode is received (i.e. the type of touch input indicates whether the desired input operation is a touch input or voice command (FIG. 2 and [0075]-[0076]). The user can select a touch or touchless, i.e. voice, mode at any time by performing the appropriate input gesture to select either mode on any button. By enabling the user to use two types of operations properly for each button in this way, the input method determining unit can determine whether the user is trying to perform an input by performing either a touch operation or a voice operation on each button (FIG. 5 and [0074])). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add the teachings of Hirai to enable reception of both touch and touchless operation, because doing so would provide an intuitive and intelligible touchless operation while ensuring the intelligibility of a touch display operation, thereby reducing the number of operation steps and the operation time.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Matsumoto et al. (“Matsumoto”, Pub. No. US 2012/0299862), Takatoh (Pub. No. US 2021/0112168), and Xiao et al. (“Xiao”, Pub. No. US 2013/0130669).

Per claim 18, the modified Matsumoto teaches the information processing apparatus according to claim 1, but does not expressly teach wherein, on the contents screen, a home screen for selecting an application is displayed based on the touch mode being determined. Xiao teaches wherein a home screen for selecting an application is displayed based on the touch mode being determined (i.e.
multiple account "views" may be created on a mobile device that allows the user to use the mobile device in a work mode, while also allowing the user to use the mobile device in a personal mode ([0010]). Database 430 may also store information associated with configuring mobile device 120 based on the different modes of operation. For example, database 430 may store information identifying one home screen background for a personal mode and another home screen background for a business mode. Database 430 may also store information identifying icons or links to applications associated with each of the modes/views in which mobile device 120 may operate (FIG. 4 and [0039]). A prompt may be provided via output device 350 (e.g., touchscreen LCD), inquiring whether the user would like to place mobile device 120 in the business mode or personal mode, with a link/selection for business view and a link/selection for personal view (FIG. 7 and [0056]-[0057])). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add the teachings of Xiao to display a home screen for selecting an application based on the touch mode being determined, because doing so would allow the user to use the same device in multiple operating modes.

Allowable Subject Matter

Claim 26 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The prior art of record, either alone or in combination, does not teach the limitations of “wherein a content displayed on the content screen includes a touch content that receives only a touch operation and a touchless content that receives only the touchless operation, wherein the information processing apparatus is caused to operate in a touch mode, based on a user operation performed on the touch content being the touch operation, and wherein the information processing apparatus is caused to operate in a touchless mode, based on a user operation performed on the touchless content being the touchless operation” in combination with the other claimed features.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kim et al. (Pub. No. US 2018/0136812) discloses touch and non-contact gesture based screen switching. Saito et al. (Pub. No. US 2017/0244943) discloses a projection control unit configured to control projection of an image on a projection plane by a projector, and a UI control unit configured to switch a mode of a user interface (UI) related to the projected image between two or more UI modes based on a positional relationship of the projector with respect to the projection plane.

Inquiries

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THANH T VU, whose telephone number is (571) 272-4073. The examiner can normally be reached M-F, 7 AM - 3:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fred Ehichioya, can be reached at (571) 272-4034.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THANH T VU/
Primary Examiner, Art Unit 2179
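For readers mapping the double-patenting and §103 analysis back to the claims, the startup behavior recited in claim 24 reduces to a three-way branch on a stored mode setting. A hypothetical sketch (the function and value names are ours, not from the application or the references):

```python
def startup_mode(configured_mode):
    """Startup behavior per the limitation quoted in claim 24.

    configured_mode: "touch", "touchless", or None (apparatus not set to
    operate in either mode). Returns the operating mode, or "prompt" when
    a user operation selecting a mode must be received.
    """
    if configured_mode == "touch":
        return "touch"        # set to touch mode -> operate in touch mode
    if configured_mode == "touchless":
        return "touchless"    # set to touchless mode -> operate in touchless mode
    return "prompt"           # not set -> receive a user mode selection


print(startup_mode("touch"))  # → touch
print(startup_mode(None))     # → prompt
```

The examiner's position is that the first two branches read on Matsumoto's switch-button modes, while the fallback prompt is supplied by Hirai.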

Prosecution Timeline

Apr 15, 2024
Application Filed
Dec 16, 2025
Non-Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602152
SYSTEMS AND METHODS TO PROVIDE PERSONALIZED GRAPHICAL USER INTERFACES WITHIN A COLLABORATION ENVIRONMENT
2y 5m to grant • Granted Apr 14, 2026
Patent 12591352
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
2y 5m to grant • Granted Mar 31, 2026
Patent 12579358
SUPPLEMENTAL CONTENT AND GENERATIVE LANGUAGE MODELS
2y 5m to grant • Granted Mar 17, 2026
Patent 12572262
COMMUNICATION APPARATUS, IMAGE GENERATION SYSTEM, CONTROL METHOD OF COMMUNICATION APPARATUS, CONTROL METHOD OF IMAGE GENERATION SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant • Granted Mar 10, 2026
Patent 12572324
SYSTEMS AND METHODS FOR DISPLAYING SUBJECTS OF AN AUDIO PORTION OF CONTENT AND SEARCHING FOR CONTENT RELATED TO A SUBJECT OF THE AUDIO PORTION
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 91% (+16.5%)
Median Time to Grant: 3y 6m
PTA Risk: Low

Based on 623 resolved cases by this examiner. Grant probability derived from career allow rate.
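The interview figure above is the base grant probability plus the measured interview lift. A sketch (additive combination is our assumption about how the tool derives the 91%):

```python
base_grant_prob = 74.5   # career allow rate, % (464 / 623)
interview_lift = 16.5    # percentage points, from the examiner's interview data

with_interview = base_grant_prob + interview_lift
print(f"{with_interview:.0f}%")  # → 91%
```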
