Prosecution Insights
Last updated: April 19, 2026
Application No. 18/707,185

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Non-Final OA (§103)
Filed: May 03, 2024
Examiner: KIM, WILLIAM JW
Art Unit: 2409
Tech Center: 2400 — Computer Networks
Assignee: Sony Group Corporation
OA Round: 3 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 2m
With Interview: 94%

Examiner Intelligence

Career allow rate: 79% (352 granted / 448 resolved), +20.6% vs TC avg (above average)
Interview lift: +15.1% (strong), comparing resolved cases with and without an interview
Average prosecution: 2y 2m (fast prosecutor), 16 applications currently pending
Career history: 464 total applications across all art units
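
These headline figures appear to be simple arithmetic on the career counts. The sketch below is a hypothetical reconstruction (the dashboard does not state its formula): it assumes the grant probability is just the career allow rate and the with-interview figure adds the stated lift.

```python
# Hypothetical reconstruction of the dashboard arithmetic (assumed, not documented):
# "Grant Probability" = career allow rate, "With Interview" = allow rate + interview lift.
granted, resolved = 352, 448

allow_rate = granted / resolved                 # 0.7857... -> displayed as 79%
interview_lift = 0.151                          # +15.1 percentage points
with_interview = allow_rate + interview_lift    # 0.9367... -> displayed as ~94%

print(f"Career allow rate: {allow_rate:.1%}")      # 78.6%
print(f"With interview:    {with_interview:.1%}")  # 93.7%, consistent with the 94% shown
```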

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 10.5% (-29.5% vs TC avg)
§112: 17.3% (-22.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 448 resolved cases.
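
Assuming each "vs TC avg" delta is the simple difference between the examiner's rate and the Tech Center estimate (an assumption; the formula is not stated), the implied baseline can be recovered from the numbers above. The quick check below shows all four deltas point to the same ~40% estimate.

```python
# Hypothetical check (assumed formula): implied TC average = examiner rate - delta.
# All values are in percentage points.
stats = {
    "§101": (8.8, -31.2),
    "§103": (50.7, +10.7),
    "§102": (10.5, -29.5),
    "§112": (17.3, -22.7),
}

for statute, (examiner_rate, delta_vs_tc) in stats.items():
    implied_tc_avg = examiner_rate - delta_vs_tc
    print(f"{statute}: implied TC average estimate = {implied_tc_avg:.1f}%")
# Every row yields 40.0%, i.e. a single shared Tech Center baseline estimate.
```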

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 21 January 2026 has been entered.

Response to Arguments

Claims 1-5, 8, 12-13, and 15-18 have been amended. Claims 1-18 are presently pending. Applicant's arguments with respect to claims 1, 17, and 18 have been considered but are moot in view of the new ground(s) of rejection. Although a new ground of rejection has been used to address additional limitations that have been added to Claims 1, 17, and 18, a response is considered necessary for several of applicant's arguments since references Tsurumi, Eronen, and Lemmey will continue to be used to meet several claimed limitations.

Regarding Applicant's arguments against the rejection of the claims under 35 USC 103 in view of Tsurumi, Eronen, and Lemmey (see Remarks, pgs. 12-19), the Examiner disagrees. Applicant argues that Tsurumi does not teach, suggest, or render obvious "the sound information includes first information to control sound image localization of a voice of a second user" (see Remarks, pgs. 13-15). Applicant further argues that Eronen also fails to teach, suggest, or render obvious said limitation (see Remarks, pgs. 16-17). Applicant further argues that the Lemmey reference (see Remarks, pg. 18, with respect to the rejection of Claims 6-11) "merely describes that 'methods and systems for control of an ensemble experience such as a sports game, a large scale event, or a video conference'".

The Examiner notes that the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. Furthermore, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

It is noted that the Final Rejection mailed on 23 October 2025 (hereinafter the Final) addressed previous claim language in which the contents of the claimed "sound control information" were recited in the alternative; the amended claim language, in which both first information to control the voice of a second user AND second information to control the sound in the content data are recited, is newly addressed below. It is noted that Tsurumi is not relied upon to disclose that the sound control information includes the recited first information. Rather, Tsurumi discloses methods and systems in which both a user's viewing state of content and data of the viewed content itself (respectively second and first time-series data) may be analyzed and audio outputs adjusted accordingly (i.e., sound control information is output according to the first and second analysis results).

Eronen is introduced to teach that audio source localization may be performed based on visual analysis of objects within a video (such as through the video analysis of Tsurumi above). Lemmey teaches that users may engage in audio/video communication during viewing of some underlying content (see Lemmey [Fig. 6] and [0053]). Lemmey [0019-20], [0048-53], and [0056-57] teach applying different volume controls to base content and to audio data from other participants (i.e., voices of second users) in accordance with analysis of the video (see Lemmey [Fig. 6]) or context of the situation (see Lemmey [0019]). As such, the Lemmey reference is analogously related to the technology of audio control of some AV content output (such as Tsurumi and Eronen), and adds functionality of receiving, controlling, and outputting voice information of second users to provide ensemble experiences and means for controlling attention between the content and video chatting while viewing some content, as explained in Lemmey [ABST], [0011-12], and [0057]. It is further noted that Independent Claims 1, 17, and 18 fail to define how the first and second analysis results specifically affect the output sound control information other than a general association, and otherwise merely require that the sound control information somehow includes controls for both voice data of a second user and the sound data of the content.

In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971). As noted above, Lemmey is cited to provide rationale and motivation for combining the features of Lemmey with the teachings of Tsurumi and Eronen. With respect to Claim 6, Miyasato is also relied upon to provide a rationale for combining (see the corresponding rejection below). As such, the combined teachings of the art of record disclose, teach, and suggest the recited claim limitations of Claims 1, 17, and 18.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 12-14, and 17-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsurumi (US 2012/0224043 A1) (of record, hereinafter Tsurumi), further in view of Eronen et al. (US 2018/0295463 A1) (of record, hereinafter Eronen), and further in view of Lemmey et al. (US 2013/0021431 A1) (of record, hereinafter Lemmey).

Regarding Claim 1, Tsurumi discloses an information processing apparatus [Figs. 1, 6] comprising a Central Processing Unit (CPU) [Figs. 1, 6] configured to: output sound control information [Figs. 1, 5; Audio output control unit 111] on a basis of a first analysis result and a second analysis result, wherein the first analysis result corresponds to first time-series data included in content data, the second analysis result corresponds to second time-series data, and the second time-series data indicates a situation of a first user, [Figs. 1-5; 0027-30, 0040-43, 0048-49: image and/or sound data of a user over time may be analyzed to determine a viewing state of the user (i.e., second time-series data); 0034, 0054-60: content data may be analyzed to detect and determine scenes and characteristics of scenes in the content (i.e., first time-series data); 0031-32, 0065, 0068, 0074, 0077, 0080: where, depending on the viewing state of the user and/or scene importance, audio output control unit 111 may control volume and/or quality of content audio] and the sound control information includes second information to control a sound included in the content data. [Fig. 5; 0031-32, 0065, 0068, 0074, 0077, 0080: where, depending on the viewing state of the user and/or scene importance, audio output control unit 111 may control volume and/or quality of content audio]

Tsurumi fails to explicitly disclose wherein the sound control information includes second information to control sound image localization of a sound included in the content data. Eronen, in analogous art, teaches wherein the sound control information includes second information to control sound image localization of a sound included in the content data. [Figs. 1-7; 0102, 0121-127: audio source localization may be determined through visual analysis of video for tracking sound sources in the video (such as the first and second analysis results of Tsurumi above)] It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify the apparatus of Tsurumi with the teachings of Eronen to analyze video content to determine sound image localization in order to automate the process of spatial audio capture, mixing, and sound track creation. [Eronen – 0002-4, 0102]

Tsurumi and Eronen fail to explicitly disclose that the sound control information includes first information to control sound image localization of a voice of a second user; and to output the voice of the second user to a user terminal associated with the first user. Lemmey, in analogous art, teaches that the sound control information includes first information to control sound image localization of a voice of a second user; and to output the voice of the second user to a user terminal associated with the first user. [Figs. 2-6; 0011, 0019-20, 0048-53: group 'ensemble' experience where video and audio from participants may be provided and output alongside presentation of some primary content, whereby sound of respective participants and base content may vary according to a variety of rules; 0056-57: where participants may be placed as objects within the video and be visually and aurally adjusted to appear and sound further away/quieter, etc.] It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify the apparatus of Tsurumi and Eronen with the teachings of Lemmey to output the voice of a second user to a user terminal of a first user in order to provide means for controlling attention management for ensemble experiences with video chatting elements and to localize the voice data accordingly to mimic real world experiences. [Lemmey – ABST; 0011-12, 0057]

Regarding Claim 2, Tsurumi, Eronen, and Lemmey disclose all of the limitations of Claim 1, which are analyzed as previously discussed with respect to that claim. Furthermore, Tsurumi and Lemmey disclose wherein the CPU is further configured to [Tsurumi – Figs. 1, 6; 0087, 0090-91: connection ports 923/communication device 925] transmit one of the content data or the voice of the second user to the user terminal; and transmit the sound control information to the user terminal. [Tsurumi – Figs. 1, 5; 0031-32, 0065, 0068, 0074, 0077, 0080: where, depending on the viewing state of the user and/or scene importance, audio output control unit 111 may control volume and/or quality of content audio; 0025: apparatus 100 provides content to display device 10, where content includes video data associated with content and audio data according to the output control unit; Lemmey – Figs. 2-6; 0011, 0019-20, 0048-53]

Regarding Claim 3, Tsurumi, Eronen, and Lemmey disclose all of the limitations of Claim 1, which are analyzed as previously discussed with respect to that claim. Furthermore, Tsurumi and Lemmey disclose wherein the CPU is further configured to [Tsurumi – Figs. 1, 6; 0087, 0090-91: connection ports 923/communication device 925] apply the sound control information to one of the sound included in the content data or the voice of the second user; output distribution data based on the application of the sound control information to the one of the sound included in the content data or the voice of the second user; and transmit the distribution data to the user terminal. [Tsurumi – Figs. 1, 5; 0031-32, 0065, 0068, 0074, 0077, 0080: where, depending on the viewing state of the user and/or scene importance, audio output control unit 111 may control volume and/or quality of content audio; 0025: apparatus 100 provides content to display device 10, where content includes video data associated with content and audio data according to the output control unit; Lemmey – Figs. 2-6; 0011, 0019-20, 0048-53]

Regarding Claim 4, Tsurumi, Eronen, and Lemmey disclose all of the limitations of Claim 1, which are analyzed as previously discussed with respect to that claim. Furthermore, Tsurumi and Lemmey disclose wherein the sound control information further includes one of third information to control a volume of the voice of the second user output to the user terminal or fourth information to control a volume of the sound included in the content data. [Tsurumi – Figs. 1, 5; 0031-32, 0065, 0068, 0074, 0077, 0080: where, depending on the viewing state of the user and/or scene importance, audio output control unit 111 may control volume and/or quality of content audio; Lemmey – Figs. 2-6; 0011, 0019-20, 0048-53]

Regarding Claim 5, Tsurumi, Eronen, and Lemmey disclose all of the limitations of Claim 1, which are analyzed as previously discussed with respect to that claim. Furthermore, Tsurumi discloses wherein the sound control information further includes one of fifth information to control sound quality of the voice of the second user output to the user terminal or sixth information to control a volume of the sound included in the content data. [Tsurumi – Figs. 1, 5; 0031-32, 0065, 0068, 0074, 0077, 0080: where, depending on the viewing state of the user and/or scene importance, audio output control unit 111 may control volume and/or quality of content audio]

Regarding Claim 12, Tsurumi, Eronen, and Lemmey disclose all of the limitations of Claim 2, which are analyzed as previously discussed with respect to that claim. Furthermore, Tsurumi discloses wherein the CPU is further configured to: analyze the second time-series data; detect a viewing state of the first user based on the analysis of the second time-series data, wherein the viewing state includes one of information indicating occurrence of a conversation between the first user and the second user, information indicating occurrence of a reaction of the first user, or information indicating whether the first user watches a screen of the user terminal; and output the sound control information based on the detected viewing state. [Tsurumi – Figs. 1-5; 0027-30, 0040-43, 0048-49: image and/or sound data of a user over time may be analyzed to determine a viewing state of the user (i.e., second time-series data); 0064-68: where the system may determine that the user's eyes are closed (i.e., not watching the screen) and may gradually lower the volume of the audio as a result of that determination]

Regarding Claim 13, Tsurumi, Eronen, and Lemmey disclose all of the limitations of Claim 12, which are analyzed as previously discussed with respect to that claim. As Claim 13 is directed toward an alternative limitation (viewing state including whether or not the user is having a conversation with another user), and another alternative (information indicating whether or not the user is watching a screen) was chosen for examination and application of art in Claim 12, the further limitations of the unselected alternative limitation will not be addressed, and Claim 13 is rejected with Claim 12.

Regarding Claim 14, Tsurumi, Eronen, and Lemmey disclose all of the limitations of Claim 12, which are analyzed as previously discussed with respect to that claim.
Furthermore, Tsurumi, Eronen, and Lemmey disclose wherein, in a case where the detected viewing state indicates that the first user does not watch the screen of the user terminal, the CPU is further configured to: generate the second information to control the sound image localization of the sound included in the content data; and control the sound image localization of the sound included in the content data based on the second information, wherein based on the controlled sound image localization of the sound included in the content data the first user feels the sound included in the content data is heard from a farther place than a sound heard immediately before a time point at which the detected viewing state indicates that the first user does not watch the screen, and the sound image localization of the sound included in the content data is controlled until the detected viewing state indicates that the first user watches the screen. [Tsurumi – Figs. 1-5; 0027-30, 0040-43, 0048-49: image and/or sound data of a user over time may be analyzed to determine a viewing state of the user (i.e., second time-series data); 0064-68: where the system may determine that the user's eyes are closed (i.e., not watching the screen) and may gradually lower the volume of the audio as a result of that determination (where it would be understood that a lower volume would make sounds feel further away); Eronen – Figs. 1-7; 0102, 0121-127; Lemmey – Figs. 2-6; 0011, 0019-20, 0048-53]

Regarding Claim 17, Claim 17 recites a method that performs the functions of the apparatus of Claim 1. As such, Claim 17 is analyzed and rejected similarly to Claim 1, mutatis mutandis.

Regarding Claim 18, Claim 18 recites a CRM that performs the functions of the apparatus of Claim 1. As such, Claim 18 is analyzed and rejected similarly to Claim 1, mutatis mutandis.

Claim(s) 6-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsurumi, Eronen, and Lemmey as applied to claim 2 above, and further in view of Miyasato et al. (US 2004/0246259 A1) (of record, hereinafter Miyasato).

Regarding Claim 6, Tsurumi, Eronen, and Lemmey disclose all of the limitations of Claim 1, which are analyzed as previously discussed with respect to that claim. Furthermore, Tsurumi discloses wherein the CPU is further configured to: analyze the first time-series data; and detect scenes of a content based on the analysis of the first time-series data. [Tsurumi – 0034, 0054-60: content data may be analyzed to detect and determine scenes and characteristics of scenes in the content (i.e., first time-series data)] Tsurumi, Eronen, and Lemmey fail to explicitly disclose wherein the CPU is further configured to: detect a progress status of a content. (Emphasis on the element of the limitation not explicitly disclosed by Tsurumi, Eronen, and Lemmey.) Miyasato, in analogous art, teaches wherein the CPU is further configured to: detect a progress status of a content. [Figs. 1, 5, 7; 0028-30, 0039-40, 0044-45: system may analyze video data to detect if a current section is a music/singing section of the video] It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify the apparatus of Tsurumi, Eronen, and Lemmey with the teachings of Miyasato to specify an analysis unit to detect a progress status of a content in order to allow users to automatically determine various positions and structure of segments of a musical program. [Miyasato – 0004-7]

Regarding Claim 7, Tsurumi, Eronen, Lemmey, and Miyasato disclose all of the limitations of Claim 6, which are analyzed as previously discussed with respect to that claim. Furthermore, Tsurumi and Miyasato further disclose wherein the CPU is further configured to detect, as the progress status, one of during performance, during a performer's utterance, before start, after end, during an intermission, or during a break. [Tsurumi – 0034, 0054-60: content data may be analyzed to detect and determine scenes and characteristics of scenes in the content; Miyasato – Figs. 1, 5, 7; 0028-30, 0039-40, 0044-45: system may analyze video data to detect if a current section is a music/singing section of the video]

Regarding Claim 8, Tsurumi, Eronen, Lemmey, and Miyasato disclose all of the limitations of Claim 6, which are analyzed as previously discussed with respect to that claim. Furthermore, Miyasato further discloses wherein the CPU is further configured to recognize music played in the content based on the detected progress status as during performance. [Miyasato – Figs. 1, 5, 7; 0028-30, 0039-40, 0044-45: system may analyze video data to detect if a current section is a music/singing section of the video]

Regarding Claim 9, Tsurumi, Eronen, Lemmey, and Miyasato disclose all of the limitations of Claim 6, which are analyzed as previously discussed with respect to that claim. Furthermore, Miyasato further discloses wherein the CPU is further configured to analyze the first time-series data based on auxiliary information to improve accuracy of analysis, and the auxiliary information includes information indicating a progress schedule of the content, information indicating a song order, or information regarding a production schedule. [Miyasato – 0031-32, 0043-45: chapter menu information may be provided with the video signal to help detect time periods over which specific songs occur]

Regarding Claim 10, Tsurumi, Eronen, Lemmey, and Miyasato disclose all of the limitations of Claim 6, which are analyzed as previously discussed with respect to that claim. Furthermore, Miyasato further discloses wherein the CPU is further configured to detect a tune of music played in the content. [Miyasato – Fig. 7; 0031, 0039-45: music/singing and title of song may be detected and identified]

Regarding Claim 11, Tsurumi, Eronen, Lemmey, and Miyasato disclose all of the limitations of Claim 6, which are analyzed as previously discussed with respect to that claim. Furthermore, Tsurumi and Eronen disclose wherein the first time-series data includes time-series data of a first video of the content, and the CPU is further configured to determine information of sound image localization corresponding to the time-series data of the first video of the content at a certain point of time, wherein the information of the sound image localization corresponding to the time-series data of the first video of the content is determined based on model information, the model information is obtained based on a learning operation, and the learning operation is based on a second video of a state where one or more pieces of music are played and information of sound image localization of a sound corresponding to the second video. [Tsurumi – 0034, 0054-60: content data may be analyzed to detect and determine scenes and characteristics of scenes in the content (i.e., first time-series data); Eronen – Figs. 1-7; 0102, 0121-127: audio source localization may be determined through visual analysis of video for tracking sound sources in the video; 0232, 0243: neural network trained classifiers may be utilized for audio and visual space classifiers (where it would be implicitly understood that neural networks (NN) are trained with data like the information to be classified/detected by the NN)]

Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsurumi, Eronen, and Lemmey as applied to claim 12 above, and further in view of Liu et al. (US 2017/0257669 A1) (of record, hereinafter Liu).

Regarding Claim 15, Tsurumi, Eronen, and Lemmey disclose all of the limitations of Claim 12, which are analyzed as previously discussed with respect to that claim. Furthermore, Tsurumi discloses wherein the second time-series data includes one of a voice of the first user, a third video of the first user, or information indicating an operation status of the user terminal of the first user. [Figs. 1-5; 0027-30, 0040-43, 0048-49: image and/or sound data of a user over time may be analyzed to determine a viewing state of the user (i.e., second time-series data)] Tsurumi, Eronen, and Lemmey fail to explicitly disclose where the CPU is further configured to detect a degree of excitement of the first user based on any one or more of the voice of the first user, the third video of the first user, or the information indicating the operation status. Liu, in analogous art, teaches where the CPU is further configured to detect a degree of excitement of the first user based on any one or more of the voice of the first user, the third video of the first user, or the information indicating the operation status. [0020-23: a user engagement detector may use cameras and/or microphones (such as the image and sound data gathered by Tsurumi) to detect a level of excitement/engagement of a user by detecting the user's facial expressions or by analyzing their voice, and may have varying thresholds for different emotions] It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify the apparatus of Tsurumi, Eronen, and Lemmey with the teachings of Liu to detect a degree of excitement of the user based on voice and video of the user, as it would be readily understood that such sensor information may be utilized to determine how engaged a user is with displayed content based on their facial expressions or voice level/tone. [Liu – 0021]

Allowable Subject Matter

Claim 16 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM J KIM whose telephone number is (571) 272-2767. The examiner can normally be reached 9:30am - 5:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hadi Armouche, can be reached at (571) 270-3618. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLIAM J KIM/
Primary Examiner, Art Unit 2409
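
For readers less familiar with the claim language argued above, the dispute centers on a concrete data flow: a first analysis of time-series data in the content and a second analysis of time-series data about the first user jointly drive sound control information that localizes both the content audio (the "second information") and a remote second user's voice (the "first information"). The sketch below is purely illustrative editorial shorthand with hypothetical names and values; it is not code from the application or from the cited references.

```python
from dataclasses import dataclass

# Illustrative sketch of the claim-1 data flow the Office Action maps onto
# Tsurumi (analysis), Eronen (localization of content sound), and Lemmey
# (localization/output of a second user's voice). All names are hypothetical.

@dataclass
class SoundControlInfo:
    # "first information": localization of the second user's voice
    second_user_voice_position: tuple[float, float, float]
    # "second information": localization of the sound in the content data
    content_sound_position: tuple[float, float, float]
    content_volume: float
    second_user_voice_volume: float

def analyze_content(content_frames) -> dict:
    """First analysis: scenes / on-screen sound sources in the content data."""
    return {"source_xy": (0.3, -0.1), "scene_importance": 0.8}

def analyze_user_state(user_frames) -> dict:
    """Second analysis: the first user's viewing state over time."""
    return {"watching_screen": False, "in_conversation": True}

def make_sound_control(first_result: dict, second_result: dict) -> SoundControlInfo:
    # Push the content audio "farther away" while the user is not watching the
    # screen (cf. the claim-14 discussion), and keep the remote voice nearby.
    depth = 3.0 if not second_result["watching_screen"] else 1.0
    x, y = first_result["source_xy"]
    return SoundControlInfo(
        second_user_voice_position=(0.5, 0.0, 1.0),
        content_sound_position=(x, y, depth),
        content_volume=0.4 if second_result["in_conversation"] else 1.0,
        second_user_voice_volume=1.0,
    )

# Both analysis results drive one sound-control output for the first user's terminal.
control = make_sound_control(analyze_content([]), analyze_user_state([]))
print(control)
```

The example values mirror the scenario discussed above: when the viewing state indicates the first user is not watching the screen, the content audio is localized to sound farther away while the second user's voice stays close, and the content volume is reduced during a conversation.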

Prosecution Timeline

May 03, 2024: Application Filed
May 03, 2024: Response after Non-Final Action
Jul 07, 2025: Non-Final Rejection (§103)
Oct 08, 2025: Response Filed
Oct 20, 2025: Final Rejection (§103)
Dec 22, 2025: Response after Non-Final Action
Jan 21, 2026: Request for Continued Examination
Jan 28, 2026: Response after Non-Final Action
Mar 10, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598351: METHODS, SYSTEMS, AND APPARATUSES FOR SCALABLE CONTENT DATA UPDATING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12594887: TECHNIQUES FOR DISPLAYING CONTENT WITH A LIVE VIDEO FEED (granted Apr 07, 2026; 2y 5m to grant)
Patent 12587701: METHODS AND SYSTEMS FOR SYNCHRONIZING PLAYBACK OF MEDIA CONTENT ITEMS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12574587: METHODS AND SYSTEMS FOR GROUP WATCHING (granted Mar 10, 2026; 2y 5m to grant)
Patent 12563251: CONTENT DELIVERY (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 94% (+15.1%)
Median Time to Grant: 2y 2m
PTA Risk: High
Based on 448 resolved cases by this examiner. Grant probability derived from career allow rate.
