Prosecution Insights
Last updated: April 19, 2026
Application No. 18/370,762

VALIDATING BITSTREAM COMPLIANCE AT RUNTIME FOR MULTIMEDIA STREAMING SYSTEMS AND APPLICATIONS

Status: Final Rejection (§103)
Filed: Sep 20, 2023
Examiner: VO, TUNG T
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Nvidia Corporation
OA Round: 4 (Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 2m
Grant Probability with Interview: 86%

Examiner Intelligence

Career allow rate: 71% (639 granted / 901 resolved), +12.9% vs Tech Center average (above average)
Interview lift: strong, +15.6% higher allow rate among resolved cases with interview
Typical timeline: 3y 2m average prosecution
Career history: 921 total applications across all art units, 20 currently pending
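The headline figures above are internally consistent and can be reproduced from the raw counts; a minimal sketch (the round-to-nearest-percent convention is an assumption about how the tool rounds):

```python
granted, resolved, pending = 639, 901, 20

# Career allow rate: granted out of resolved cases
allow_rate_pct = round(100 * granted / resolved)

# Total applications: resolved plus currently pending
total_applications = resolved + pending

print(allow_rate_pct, total_applications)  # 71 921
```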

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§102: 28.0% (-12.0% vs TC avg)
§112: 3.4% (-36.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 901 resolved cases.
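The per-statute deltas can be inverted to recover the implied Tech Center baseline. Assuming the delta is simply the examiner's rate minus the TC average (an assumption about how the tool computes it), every row happens to imply the same 40.0% baseline, a useful consistency check on the table:

```python
# (examiner rate %, delta vs TC avg %) per statute, from the table above
stats = {
    "101": (5.4, -34.6),
    "103": (47.3, 7.3),
    "102": (28.0, -12.0),
    "112": (3.4, -36.6),
}

# Implied TC average = examiner rate minus delta
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # each statute implies a 40.0% TC average
```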

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see the remarks filed 01/06/2025, with respect to the amended claims 1, 9, and 15 have been fully considered but are moot in view of the new grounds of rejection relying on the teachings of Novotny et al. (US 20050216815 A1).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Novotny et al. (US 20050216815 A1).
Note: the specification of the patent application publication (US 20250097471 A1) discloses “a reference checksum”, “a checksum value”, and “a reference checksum value”: “The reconstructed frame output by the deblock filter 226 can be received by the checksum block 230, which computes a reference checksum based on the reconstructed frame” … “To compute the checksum, the checksum block 230 can use a checksum calculation algorithm (e.g., CRC, Adler-32, a hash function (e.g., MD5, SHA-1, etc.), and the like) to process pixel values or data within the frame to generate the reference checksum value” in paragraph [0046]. There is no disclosure of “a single reference checksum” … to generate “the single reference checksum value” in the specification. So “a single reference checksum” is treated as “a reference checksum” and “the single reference checksum value” is treated as “the reference checksum value”.

Regarding claim 1, Novotny teaches a method comprising:

encoding, by a graphics processing unit (GPU) (100 and 122 of fig. 1, [0017] an encoder system or circuit), a frame of a video (106 ORIG of fig. 1, 140 of fig. 3; [0026] a unit of information is a picture, frame, field, sub-picture, block, macroblock or other spatial and/or temporal areas, [0038] and [0041] the process, 140 of fig. 2, may be repeated for each group of pictures, picture, frame or field in the signal ORIG);

determining, by the GPU (128 of fig. 1, for determining the reference checksum), a single reference checksum of the encoded frame (122 and COMPT of fig. 1, for the encoded frame; [0026] the checksum calculation circuit 128 calculates checksum values for units of information within the signal RECON, each checksum value may be presented in the signal CS1T, one value for each unit of information present within the signal RECON simultaneously, and a unit of information may be a picture, frame, field, sub-picture, block, macroblock or other spatial and/or temporal areas; [0049] each checksum value may be presented in the signal CS2, 206 of fig. 3 of the decoder; [0061]-[0065] generation of the checksum values may be accomplished by a variety of methods) by processing pixel values of a reconstructed frame (128 of fig. 1 for the processing of pixel values of the reconstructed frame from the reconstructed video 126 of fig. 1; [0061]-[0065] implementation of the example checksum generation method may be provided in both the checksum calculation circuit 128 operating on the reconstructed signal RECON and the checksum calculation circuit 206 operating on the decompressed signal DECOMP, the example checksum generation method using a unit of information (e.g., predetermined number of consecutive data samples) being processed; wherein the data samples of the unit of information would obviously be treated as pixel values of the reconstructed or decompressed frame, RECON of fig. 1) to generate the single reference checksum value (CS1T of fig. 1, the single reference checksum value; [0026] the checksum calculation circuit 128 calculates checksum values for units of information within the signal RECON, each checksum value may be presented in the signal CS1T, one value for each unit of information present within the signal RECON simultaneously, and a unit of information may be a picture, frame, field, sub-picture, block, macroblock or other spatial and/or temporal areas; [0061]-[0065] implementation of the example checksum generation method may be provided in both the checksum calculation circuit 128 operating on the reconstructed signal RECON and the checksum calculation circuit 206 operating on the decompressed signal DECOMP) encompassing a plurality of color space components of the frame ([0026] each calculated checksum value is for each picture, frame, sub-picture, block, or macroblock; wherein the picture, frame, sub-picture, block, or macroblock of a digital video has a plurality of color space components as disclosed in paragraph [0018]);

adding, by the GPU, the reference checksum to supplemental metadata associated with the encoded frame (124 of fig. 1, [0024] to generate the signal OUT to include the compressed information in the signal COMPT, error detection information in the signal CS1T and quality information in the signal QUAL1T, [0026] each checksum value may be presented in the signal CS1T); and

transmitting, to a decoding device (fig. 3, a decoder system), the encoded frame and the supplemental metadata (OUT of fig. 1, IN of fig. 3; [0041] the user data insertion circuit 124 may generate the signal OUT from the signals COMPT, CS1T and QUAL1T (e.g., step 160), and the signal OUT may be transmitted/written to the transmission/storage medium 102 (e.g., step 162)) to cause a verification of the integrity of the frame by processing the received data using the decoding device ([0036] a number of different methods are generally available for calculating the signal CS1T as part of a reconstruction process verification; [0044] the signal INFOR may transfer or present one or more reconstruction verification comparison results to the receiving user 184; [0065] the decoder system 180 may perform the same checksum generation operation as the encoder system 100 to verify the reliability of the data received in the signal IN from the medium 102; [0068] both the video quality and the checksum (e.g., video reconstruction verification) data may be stored in the same payload message; and [0075] the checksum data for the reconstructed video data that is embedded in the encoded bitstream generally allows the decoder system 180 to verify a correctness of the video reconstruction and report any errors to the receiving user 184, and the checksum data may allow an automated verification of the entire encoding 140-medium 102-decoding 240 processes);

wherein: no read from memory operation or write to memory operation is performed for the encoded frame data (COMPT of fig. 1) and reference checksum (CS1T of fig. 1) using the GPU (100 of fig. 1) throughout the encoding (122 of fig. 1, 144 and 146 of fig. 2 for the encoding), checksum determination (128 of fig. 1, 154-158 of fig. 2 for the checksum determination), and metadata addition operations (124 of fig. 1, and 160 of fig. 3 for the metadata addition operations); and verification of the integrity of the frame is performed without a read from memory operation or a write to memory operation ([0044] the signal INFOR may transfer or present one or more reconstruction verification comparison results to the receiving user 184; [0057] verification of the comparison result; [0036] a number of different methods are generally available for calculating the signal CS1T as part of a reconstruction process verification, as noted above; [0065] the checksum values generated in the encoder system 100 may be attached to the transmitted data in the signal OUT to the medium 102, and the decoder system 180 may perform the same checksum generation operation as the encoder system 100 to verify the reliability of the data received in the signal IN from the medium 102; [0075] the checksum data for the reconstructed video data that is embedded in the encoded bitstream generally allows the decoder system 180 to verify a correctness of the video reconstruction and report any errors to the receiving user 184, and the checksum data may allow an automated verification of the entire encoding 140, transmission medium 102, and decoding 240 processes).

Regarding claim 2, Novotny teaches the method of claim 1, wherein determining the reference checksum of the frame comprises: decoding the encoded frame to obtain a decoded frame (RECON of fig. 1, [0026] the decoded/decompressed data may also be used to generate the signal RECON); and computing the reference checksum of the decoded frame (202, 206, and 208 of fig. 3, [0049]).

Regarding claim 3, Novotny further teaches the method of claim 1, wherein the reference checksum of the frame is determined using a cyclic redundancy check (CRC) algorithm ([0055] a CRC method or a parity method may be implemented in the checksum calculation circuits 128 and 206).
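The mechanism at issue in claims 1-3, one checksum value computed over the pixel data of a reconstructed frame and encompassing all of its color space components, can be sketched in a few lines. This is an illustrative reconstruction, not code from the application or from Novotny; the planar Y/U/V layout and the choice of CRC-32 (one of the algorithms the specification lists) are assumptions:

```python
import zlib

def frame_reference_checksum(y: bytes, u: bytes, v: bytes) -> int:
    """One CRC-32 value spanning all color space components of a frame."""
    crc = zlib.crc32(y)
    crc = zlib.crc32(u, crc)  # chain the running CRC across planes
    return zlib.crc32(v, crc)

# Encoder side: compute over the reconstructed frame, attach as metadata.
ref = frame_reference_checksum(b"\x10" * 16, b"\x80" * 4, b"\x80" * 4)

# Decoder side: recompute over the decoded frame and compare.
ok = frame_reference_checksum(b"\x10" * 16, b"\x80" * 4, b"\x80" * 4) == ref
corrupt = frame_reference_checksum(b"\x11" * 16, b"\x80" * 4, b"\x80" * 4) == ref
print(ok, corrupt)  # True False
```

Chaining the running CRC value across planes is what makes this a single reference checksum rather than one value per component.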
Regarding claim 5, Novotny further teaches the method of claim 1, further comprising: receiving, at the recipient, the encoded frame and the supplemental metadata comprising the reference checksum (IN 183 of fig. 3); decoding the encoded frame (202 of fig. 3); computing a checksum of the decoded frame (206 of fig. 3); comparing the computed checksum with the reference checksum included in the supplemental metadata (208 of fig. 3); and verifying an integrity of the decoded frame based on a result of the comparing ([0057], [0065], and [0075]).

Regarding claim 6, Novotny further teaches the method of claim 5, further comprising: determining that the computed checksum is not equivalent to the reference checksum; and responsive to determining the computed checksum is not equivalent to the reference checksum, ceasing to decode subsequent encoded frames of the video (206 of fig. 3, 248, NO, 250, 254 of fig. 3, [0049] and [0060]).

Regarding claim 7, Novotny further teaches the method of claim 5, further comprising: determining that the computed checksum is equivalent to the reference checksum; and responsive to determining the computed checksum is equivalent to the reference checksum, continuing to decode subsequent encoded frames of the video (240, 250, and YES of fig. 4, [0060]).

Regarding claim 8, Novotny further teaches the method of claim 5, further comprising: decoding the supplemental metadata to obtain decoded supplemental metadata (fig. 5); and extracting the reference checksum from the decoded supplemental metadata (fig. 5).

Regarding claim 9, Novotny teaches a method comprising:

receiving, by a graphics processing unit (GPU) (a decoder system of figure 3 and its process of figure 4), an encoded bitstream comprising a plurality of encoded frames of a video and associated supplemental metadata (IN 186 of fig. 3, [0043]);

decoding, by the GPU, a frame of the encoded bitstream to obtain a decoded frame (202 of fig. 3, [0045] and [0047]);

computing, by the GPU, a checksum of the decoded frame ([0026] the checksum calculation circuit 128 calculates checksum values for units of information within the signal RECON, each checksum value may be presented in the signal CS1T, one value for each unit of information present within the signal RECON simultaneously, and a unit of information may be a picture, frame, field, sub-picture, block, macroblock or other spatial and/or temporal areas; [0049] the circuit 208 may be referred to as a checksum calculation circuit, implementation of the checksum calculation circuit 208 may be optional, the checksum calculation circuit 208 may be operational to calculate checksum values for units of information within the signal DECOMP, each checksum value may be presented in the signal CS2, presentation of checksum data may include multiple checksum values substantially simultaneously, one value for each unit of information present within the signal DECOMP simultaneously, presentation of the checksum data may also be sequential as new units of information are received via the signal DECOMP, and the decoder checksum calculation process implemented by the checksum calculation circuit 206 should match the encoder checksum calculation process implemented by the checksum calculation circuit 128; [0061]-[0065] generation of the checksum values may be accomplished by a variety of methods; the decoded frame) encompassing a plurality of color space components of the frame ([0026] each calculated checksum value is for each picture, frame, sub-picture, block, or macroblock; wherein the picture, frame, sub-picture, block, or macroblock of a digital video has a plurality of color space components as shown in paragraph [0018]);

comparing, by the GPU, the computed checksum with a single reference checksum included in the supplemental metadata associated with the decoded frame (208 of fig. 3, comparing a single reference checksum, CS1R of fig. 3, to the computed checksum, CS2 of fig. 3; [0060] the compare circuit 208 may compare the checksum values in the signal CS2 to the checksum values received in the signal CS1R to determine a state (e.g., match or non-match) for the signal RESULTS (e.g., step 254), and if multiple results exist for a single picture, the compare circuit 208 may further combine the multiple results into a single result (e.g., step 256)) without performing a read from memory operation or a write to memory operation for the comparing (208 of fig. 3 and 254 of fig. 4, there is no read from or write to memory operation in the compare circuit 208 of figure 3 and the step 254 of figure 4); and

verifying an integrity of the decoded frame based on a result of the comparing ([0036] a number of different methods are generally available for calculating the signal CS1T as part of a reconstruction process verification; [0044] the signal INFOR may transfer or present one or more reconstruction verification comparison results to the receiving user 184; [0065] the decoder system 180 may perform the same checksum generation operation as the encoder system 100 to verify the reliability of the data received in the signal IN from the medium 102; [0068] both the video quality and the checksum (e.g., video reconstruction verification) data may be stored in the same payload message; and [0075] the checksum data for the reconstructed video data that is embedded in the encoded bitstream generally allows the decoder system 180 to verify a correctness of the video reconstruction and report any errors to the receiving user 184, and the checksum data may allow an automated verification of the entire encoding 140-medium 102-decoding 240 processes);

wherein no read from memory operation or write to memory operation is performed for the encoded frame data and reference checksum using the GPU throughout the encoding (encoder system of fig. 1, 144 and 146 of fig. 2), checksum determination (128 of fig. 1, 154-158 of fig. 2, 206 and 208 of fig. 3), and metadata addition operations (124 of fig. 1, and 160 of fig. 2).

Regarding claims 10-12, see the analysis of claims 6-8. Regarding claim 13, see the analysis of claim 3.

Regarding claim 15, Novotny teaches a system, comprising: a graphics processing unit (GPU) (fig. 1) comprising a first logic ([0078]) to:

encode a frame of a video (106, ORIG, and 122 of fig. 1, the process 140 of fig. 3; [0026] a unit of information is a picture, frame, field, sub-picture, block, macroblock or other spatial and/or temporal areas, [0038] and [0041] the process, 140 of fig. 2, may be repeated for each group of pictures, picture, frame or field in the signal ORIG);

determine a single reference checksum of the frame (122 and COMPT of fig. 1, for the frame; 128 of fig. 1, for determining the reference checksum; [0026] the checksum calculation circuit 128 calculates checksum values for units of information within the signal RECON, each checksum value may be presented in the signal CS1T, one value for each unit of information present within the signal RECON simultaneously, and a unit of information may be a picture, frame, field, sub-picture, block, macroblock or other spatial and/or temporal areas; [0049] each checksum value may be presented in the signal CS2, 206 of fig. 3 of the decoder; [0061]-[0065] generation of the checksum values may be accomplished by a variety of methods) by processing pixel values of a reconstructed frame ([0026] the unit of information is a picture, frame, field, sub-picture, block, macroblock; 126 of fig. 1, for generating a reconstructed frame as RECON; 128 of fig. 1 for the processing of pixel values of the reconstructed frame from the reconstructed video 126 of fig. 1; [0061]-[0065] implementation of the example checksum generation method may be provided in both the checksum calculation circuit 128 operating on the reconstructed signal RECON and the checksum calculation circuit 206 operating on the decompressed signal DECOMP, the example checksum generation method using a unit of information (e.g., predetermined number of consecutive data samples) being processed; wherein the data samples of the unit of information would obviously be treated as pixel values of the reconstructed or decompressed frame) to generate the single reference checksum value (CS1T of fig. 1, the single reference checksum value; [0026] the checksum calculation circuit 128 calculates checksum values for units of information within the signal RECON, each checksum value may be presented in the signal CS1T, one value for each unit of information present within the signal RECON simultaneously, and a unit of information may be a picture, frame, field, sub-picture, block, macroblock or other spatial and/or temporal areas; [0061]-[0065] implementation of the example checksum generation method may be provided in both the checksum calculation circuit 128 operating on the reconstructed signal RECON and the checksum calculation circuit 206 operating on the decompressed signal DECOMP) encompassing a plurality of color space components of the frame ([0026] each calculated checksum value is for each picture, frame, sub-picture, block, or macroblock; wherein the picture, frame, sub-picture, block, or macroblock of a digital video has a plurality of color space components as disclosed in paragraph [0018]);

add the reference checksum to supplemental metadata associated with the encoded frame of the video (124 of fig. 1, [0024] to generate the signal OUT to include the compressed information in the signal COMPT, error detection information in the signal CS1T and quality information in the signal QUAL1T, [0026] each checksum value may be presented in the signal CS1T); and

transmit the encoded frame and the supplemental metadata to a recipient (a recipient as a decoder system of fig. 3; OUT of fig. 1, IN of fig. 3; [0041] the user data insertion circuit 124 may generate the signal OUT from the signals COMPT, CS1T and QUAL1T (e.g., step 160), and the signal OUT may be transmitted/written to the transmission/storage medium 102 (e.g., step 162)), enabling the recipient to verify integrity of the frame by processing the received data within the recipient's processing resources ([0057] the signal RESULT may be referred to as a result signal, implementation of the result signal may be optional, the result signal RESULT may carry a match indication when the checksum in the signal CS1R matches the checksum in the signal CS2, and the result signal RESULT may carry a non-match indication when the checksum in the signal CS1R is different from the checksum in the signal CS2; [0065] the checksum values generated in the encoder system 100 may be attached to the transmitted data in the signal OUT to the medium 102, and the decoder system 180 may perform the same checksum generation operation as the encoder system 100 to verify the reliability of the data received in the signal IN from the medium 102; [0068] both the video quality and the checksum (e.g., video reconstruction verification) data may be stored in the same payload message; and [0075] the checksum data for the reconstructed video data that is embedded in the encoded bitstream generally allows the decoder system 180 to verify a correctness of the video reconstruction and report any errors to the receiving user 184, and the checksum data may allow an automated verification of the entire encoding 140-medium 102-decoding 240 processes), such that verification of the integrity of the frame is performed without a read from memory operation or a write to memory operation ([0065] the checksum values generated in the encoder system 100 may be attached to the transmitted data in the signal OUT to the medium 102, and the decoder system 180 may perform the same checksum generation operation as the encoder system 100 to verify the reliability of the data received in the signal IN from the medium 102; [0068] both the video quality and the checksum (e.g., video reconstruction verification) data may be stored in the same payload message; and [0075] the checksum data for the reconstructed video data that is embedded in the encoded bitstream generally allows the decoder system 180 to verify a correctness of the video reconstruction and report any errors to the receiving user 184, and the checksum data may allow an automated verification of the entire encoding 140-medium 102-decoding 240 processes);

wherein no read from memory operation or write to memory operation is performed for the encoded frame data and reference checksum using the GPU throughout the encoding (encoder system of fig. 1, 144 and 146 of fig. 3), checksum determination (128 of fig. 1, 154-158 of fig. 2), and metadata addition operations (124 of fig. 1, and 160 of fig. 3).

Regarding claim 16, Novotny teaches the system of claim 15; Novotny further teaches wherein the first logic comprises at least one of a hardware encoder or a software encoder (figs. 1 and 2, encoder system, [0078]).

Regarding claim 17, see the analysis of claim 5. Regarding claim 18, see the analysis of claim 6. Regarding claim 19, see the analysis of claim 3.

Regarding claim 20, Novotny teaches the system of claim 17; Novotny further teaches wherein the second logic comprises at least one of a hardware decoder or a software decoder (figs. 3 and 4, [0078]).

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Novotny et al. (US 20050216815 A1) as applied to claims 1 and 9, and further in view of Wang (US 20220217411 A1).

Regarding claims 4 and 14, Novotny discloses the methods of claims 1 and 9. Novotny is silent about wherein the supplemental metadata comprises a supplemental enhancement information (SEI) message included within a payload of a Network Abstraction Layer (NAL) unit associated with a video coding standard. Wang teaches wherein the supplemental metadata comprises a supplemental enhancement information (SEI) message included within a payload of a Network Abstraction Layer (NAL) unit associated with a video coding standard (bitstream of fig. 7, [0047] a payload type (payloadType) is a syntax element that indicates the type of data contained in an SEI message and hence indicates the type of SEI message that is contained in an SEI NAL unit, [0115] describes SEI within the payload of a NAL unit with the video coding standard). Taking the teachings of Novotny and Wang together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the SEI-within-NAL-payload of Wang into the bitstream of Novotny for improvements in signaling parameters to support coding of multi-layer bitstreams ([0002] of Wang).
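Claims 6 and 7 describe the decoder-side control flow: continue decoding while checksums match, and cease decoding subsequent frames on a mismatch. A minimal sketch under stated assumptions (the identity "decode" step, CRC-32 as the checksum, and the (payload, reference) pairing are all hypothetical simplifications):

```python
import zlib

def decode_with_verification(stream):
    """stream: iterable of (encoded_frame_bytes, reference_checksum) pairs."""
    decoded = []
    for payload, ref in stream:
        frame = payload  # stand-in for the actual codec decode step
        if zlib.crc32(frame) != ref:
            break  # mismatch: cease decoding subsequent frames (claim 6)
        decoded.append(frame)  # match: continue to the next frame (claim 7)
    return decoded

good = b"frame-data"
bad = b"frame-data-corrupted"
stream = [
    (good, zlib.crc32(good)),
    (bad, zlib.crc32(good)),   # wrong checksum: simulates corruption
    (good, zlib.crc32(good)),  # never reached
]
print(len(decode_with_verification(stream)))  # 1
```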
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUNG T VO, whose telephone number is (571) 272-7340. The examiner can normally be reached Monday-Friday, 6:30 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

TUNG T. VO
Primary Examiner
Art Unit 2425

/TUNG T VO/
Primary Examiner, Art Unit 2425

Prosecution Timeline

Sep 20, 2023: Application Filed
Jan 30, 2025: Non-Final Rejection — §103
Apr 09, 2025: Interview Requested
Apr 21, 2025: Applicant Interview (Telephonic)
Apr 24, 2025: Examiner Interview Summary
May 05, 2025: Response Filed
May 21, 2025: Final Rejection — §103
Jul 23, 2025: Response after Non-Final Action
Aug 25, 2025: Request for Continued Examination
Sep 11, 2025: Response after Non-Final Action
Oct 02, 2025: Non-Final Rejection — §103
Jan 05, 2026: Applicant Interview (Telephonic)
Jan 06, 2026: Response Filed
Jan 21, 2026: Examiner Interview Summary
Jan 29, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603995: Video Coding Using Multi-resolution Reference Picture Management (2y 5m to grant; granted Apr 14, 2026)
Patent 12598278: SINGLE 2D DIGITAL IMAGE CAPTURE SYSTEM PROCESSING, DISPLAYING OF 3D DIGITAL IMAGE SEQUENCE (2y 5m to grant; granted Apr 07, 2026)
Patent 12593024: HEAD-UP DISPLAY DEVICE (2y 5m to grant; granted Mar 31, 2026)
Patent 12593020: SINGLE 2D IMAGE CAPTURE SYSTEM, PROCESSING & DISPLAY OF 3D DIGITAL IMAGE (2y 5m to grant; granted Mar 31, 2026)
Patent 12587624: FINAL VIEW GENERATION USING OFFSET AND/OR ANGLED SEE-THROUGH CAMERAS IN VIDEO SEE-THROUGH (VST) EXTENDED REALITY (XR) (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 71%
With Interview: 86% (+15.6%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 901 resolved cases by this examiner. Grant probability derived from career allow rate.
