Prosecution Insights
Last updated: April 19, 2026
Application No. 18/852,306

FRAME BUFFER USAGE DURING A DECODING PROCESS

Final Rejection §103
Filed: Sep 27, 2024
Examiner: VO, TUNG T
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: V-NOVA INTERNATIONAL LTD
OA Round: 2 (Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 71% (639 granted / 901 resolved; +12.9% vs TC avg) — above average
Interview Lift: +15.6% among resolved cases with interview
Avg Prosecution: 3y 2m (typical timeline)
Career History: 921 total applications across all art units; 20 currently pending

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§102: 28.0% (-12.0% vs TC avg)
§112: 3.4% (-36.6% vs TC avg)
Baseline: Tech Center average estimate • Based on career data from 901 resolved cases

Office Action — §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 25-42 is/are rejected under 35 U.S.C. 103 as being unpatentable over Handford (US 20190313109 A1) in view of Cismas (US 10390010 B1).

Regarding claim 25, Handford discloses a decoder apparatus implemented as a dedicated hardware circuit (figure 8), wherein the decoder apparatus comprises a data communication link for communication with a memory (803, 804, and 805 of fig. 8), the decoder apparatus comprising: a processor ([0134-0235]); a non-transitory storage device ([0134-0135]) that stores computer executable instructions that, when executed by the processor, cause the decoder apparatus to perform a method of using a frame buffer ([0078 and 0083] a buffer) during a decoding process (figures 4A and 4B), wherein the method is performed on a dedicated hardware circuit (fig. 8), and the method comprises: using a frame buffer to store data representative of a first frame data ([0078] and [0083] a buffer for storing the spatial correlation elements, 424 of fig. 4B; [0036] the row and column elements of an image or a frame from a sequence of images or frames making up the video signal, and the spatial correlation elements have the row and column elements of the image or frame), wherein the data representative of the first frame data is used when processing a second frame data (424 of fig. 4B, the spatial correlation elements (t0) as the first frame data is used to produce the spatial correlation elements (t1), 418 of fig. 4B, as a second frame data); wherein: the data representative of a first frame data is a set of transformed elements indicative of an extent of spatial correlation in the first frame data ([0048] A set of spatial correlation elements 218 is generated using the set of residual elements 216. The term “spatial correlation element” is used herein to indicate an element that is indicative of an extent, or measure, of spatial correlation between a plurality of residual elements in the set of residual elements 216. The correlation elements in the set of spatial correlation elements 218 may also be referred to as “coefficients”, “spatial coefficients” or “transformed elements”. The set of spatial correlation elements 218 is associated with the first time sample, t.sub.1, of the signal; 424 of fig. 4B, spatial correlation elements (t0)); the method compresses the set of transformed elements using a lossless compression technique ([0039]-[0040] H.264 encoding and H.264 decoding would obviously encompass a lossless compression that is well-known in the art) and sends the compressed set of transformed elements to the frame buffer (424 of figs. 4A and 4B, the compressed set of transformed elements are sent to the buffer, [0078 and 0083]) for retrieval when processing the second frame data (418 of fig. 4B, [0083 and 0084]).

It is noted that Handford is silent about the frame buffer being stored in memory external to the dedicated hardware circuit. Cismas (US 10390010 B1) teaches the frame buffer being stored in memory external to the dedicated hardware circuit (Col. 4, lines 19-20 and Col. 11, lines 61-64). Taking the teachings of Handford and Cismas together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the external memory of Cismas into the hardware circuit of Handford so that the allocation FIFO entries may use only the UnitID fields, reducing the FIFO overhead.

Regarding claim 26, Handford and Cismas teach the method of claim 25, and Cismas further teaches wherein the retrieval of the set of transformed elements from the frame buffer comprises performing an inverse lossless compression technique on the compressed set of transformed elements (62 of fig. 2-A, lossless 2D encoder).

Regarding claim 27, Handford and Cismas teach the method of claim 26, and Handford further teaches wherein the first frame data comprises a first set of residual elements (416 of figs. 4A and 4B).
Regarding claim 28, Handford and Cismas teach the method of claim 26, Handford further teaches wherein the first set of residual elements are based on a difference between a first rendition of a first frame associated with the first frame data at a first level of quality in a tiered hierarchy having multiple levels of quality and a second rendition of the first frame at the first level of quality (406 and 414 of fig. 4A, [0076] the first apparatus 402 obtains a set of residual elements 416 by comparing the input data 406 with the upsampled data 414; [0047] a given residual element is obtained by subtracting a value of a signal element in the upsampled data from a value of a corresponding signal element in the input data).

Regarding claim 29, Handford and Cismas teach the method of claim 27, Handford further teaches wherein the set of transformed elements indicate the extent of spatial correlation between the first set of residual elements such that the set of transformed elements indicate at least one of average, horizontal, vertical and diagonal relationship between neighboring residual elements in the set of residual elements ([0077] a horizontal, vertical and/or diagonal similarity and/or “tilt” between neighbouring residual elements in the set of residual elements 416).
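Claims 28-29 describe residual elements obtained by subtracting an upsampled lower-quality rendition from the input frame, and transformed elements capturing average, horizontal, vertical and diagonal relationships between neighbouring residuals. As an illustrative sketch only (the application's exact transform is not reproduced in the Office Action; the 2x2 Hadamard-style decomposition below is an assumption):

```python
def residuals(input_frame, upsampled):
    # A residual element is the input value minus the corresponding
    # upsampled value (cf. Handford [0047]); frames as nested lists.
    return [[x - u for x, u in zip(xrow, urow)]
            for xrow, urow in zip(input_frame, upsampled)]

def directional_transform(block):
    # Hypothetical 2x2 decomposition of a residual block into average,
    # horizontal, vertical and diagonal components (Hadamard-style).
    (a, b), (c, d) = block
    return ((a + b + c + d) / 4.0,  # average relationship
            (a - b + c - d) / 4.0,  # horizontal
            (a + b - c - d) / 4.0,  # vertical
            (a - b - c + d) / 4.0)  # diagonal
```

A uniform residual block yields only an average component, with the three directional components zero, which is what makes such coefficients useful as a measure of spatial correlation.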
Regarding claim 30, Handford and Cismas teach the method of claim 27, Handford further teaches wherein the method comprises receiving a first input data, wherein the first input data is indicative of an extent of temporal correlation between the set of transformed elements and a second set of transformed elements ([0006] first input data based on a set of spatio-temporal correlation elements and second input data based on a rendition of a first time sample of a signal at a relatively low level of quality in a tiered hierarchy having multiple levels of quality, [0141] a set of spatio-temporal correlation elements may be indicative of an extent of temporal correlation between first reference data based on a first rendition of a first time sample of a signal and second reference data based on a rendition of a second time sample of the or another signal).

Regarding claim 31, Handford and Cismas teach the method of claim 30, Handford further teaches wherein the second set of transformed elements are indicative of an extent of spatial correlation in a second set of residual elements ([0146] a second set of spatial correlation elements indicative of an extent of spatial correlation between a second set of residual elements associated with a second time sample of the signal, the data processing apparatus 402 obtains a first set of residual elements associated with the first time sample of the signal and a second set of residual elements associated with the second time sample of the signal, generates a set of temporal correlation elements indicative of an extent of temporal correlation between the first set of residual elements and the second set of residual elements).
Regarding claim 32, Handford and Cismas teach the method of claim 31, Handford further teaches wherein the second set of residual elements are for reconstructing a rendition of a second frame associated with the second frame data at the first level of quality using data based on a rendition of the second frame at the second level of quality (414 and 416 of fig. 4B, [0078 and 0085], [0147]).

Regarding claim 33, Handford and Cismas teach the method of claim 31, Handford further teaches wherein the second set of residual elements are based on a difference between a first rendition of the second frame at the first level of quality in a tiered hierarchy having multiple levels of quality and a second rendition of the second frame at the first level of quality ([0079]).

Regarding claim 34, Handford and Cismas teach the method of claim 31, Handford further teaches wherein the second set of transformed elements indicate the extent of spatial correlation between the plurality of residual elements in the second set of residual elements associated with the second frame such that the second set of transformed elements indicate at least one of an average, horizontal, vertical and diagonal relationship between neighboring residual elements in the second set of residual elements ([0077] At least one correlation element in the first set of spatial correlation elements 418 may, for example, indicate a horizontal, vertical and/or diagonal similarity and/or “tilt” between neighbouring residual elements in the set of residual elements 416. The first set of spatial correlation elements 418 exploits spatial correlation but not temporal correlation at the higher, residual level).

Regarding claim 35, Handford and Cismas teach the method of claim 30, Handford further teaches wherein the method comprises combining the first input data with the set of transformed elements to generate the second set of transformed elements (424 and 426 of fig. 4B, [0084]).
Regarding claim 36, Handford and Cismas teach the method of claim 35, Handford further teaches wherein the method comprises performing an inverse transformation operation on the second set of transformed elements to generate the second set of residual elements ([0096]).

Regarding claim 37, Handford and Cismas teach the method of claim 36, Handford further teaches wherein the method comprises receiving a second input data, wherein the second input data is at the second level of quality in the tiered hierarchy, the second level being lower than the first level ([0006, 0016, 0082, and 0143] the decoder device receives input data comprising first input data based on a set of correlation elements and second input data based on a rendition of a first time sample of a signal at a relatively low level of quality in a tiered hierarchy having multiple levels of quality).

Regarding claim 38, Handford and Cismas teach the method of claim 37, Handford further teaches wherein the method comprises performing an upsampling operation on the second input data to generate a second rendition of the second frame at the first level of quality ([0076, 0082, and 0085]).

Regarding claim 39, Handford and Cismas teach the method of claim 38, Handford further teaches wherein the method comprises combining the second rendition of the second frame and the second set of residual elements to reconstruct the second frame ([0085]).

Regarding claim 40, Handford and Cismas teach the method of claim 30, Handford further teaches wherein the first input data comprises a quantized version of a result of a difference between the set of transformed elements and the second set of transformed elements ([0109 and 0144]).
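Claims 37-39 outline a tiered reconstruction: receive lower-level input, upsample it to the first level of quality, then combine it with the residual elements to rebuild the frame. A minimal sketch of that step, assuming a nearest-neighbour 2x upsampler (the application's actual upsampling operation is not specified in the Office Action):

```python
def upsample2x(low):
    # Nearest-neighbour 2x upsampling: each element becomes a 2x2 block.
    # Illustrative stand-in for the claimed upsampling operation.
    out = []
    for row in low:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide[:])
        out.append(wide[:])
    return out

def reconstruct(second_input, residual_elements):
    # Claims 37-39 in outline: upsample the second (lower-level) input
    # to the first level of quality, then add the residual elements.
    up = upsample2x(second_input)
    return [[u + r for u, r in zip(urow, rrow)]
            for urow, rrow in zip(up, residual_elements)]
```

The residuals correct exactly the information the upsampled rendition lacks, which is why the tiered hierarchy can carry the lower level at reduced cost.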
Regarding claim 41, Handford and Cismas teach the method of claim 30, Handford further teaches wherein the set of transformed elements are associated with an array of signal elements in the first frame and wherein the second set of transformed elements are associated with an array of signal elements in the second frame at the same spatial position as the array of signal elements in the first frame ([0036 and 0048]).

Regarding claim 42, Handford and Cismas teach the method of claim 25, Cismas further teaches wherein the lossless compression technique comprises two different lossless compression techniques (58 and 62 of fig. 2-A, Col. 4, lines 25-33).

Claim(s) 43 is/are rejected under 35 U.S.C. 103 as being unpatentable over Handford (US 20190313109 A1) in view of Cismas (US 10390010 B1) as applied to claim 25, and further in view of Lim (US 20150381993 A1).

Regarding claim 43, Handford and Cismas teach the method of claim 25. Handford and Cismas do not teach wherein the lossless compression technique comprises at least one of run length encoding and Huffman encoding, or wherein the lossless compression technique comprises run length encoding followed by Huffman encoding. Lim teaches wherein the lossless compression technique comprises at least one of run length encoding and Huffman encoding or wherein the lossless compression technique comprises run length encoding followed by Huffman encoding ([0002]). Taking the teachings of Handford, Cismas, and Lim together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the run length encoding and Huffman encoding of Lim into the decoding method of Handford and Cismas to efficiently encode the bits representing the selected transform.

Claim(s) 44 is/are rejected under 35 U.S.C. 103 as being unpatentable over Handford (US 20190313109 A1) in view of Cismas (US 10390010 B1) as applied to claim 25, and further in view of Luo (US 20190261010 A1).

Regarding claim 44, Handford and Cismas teach the method of claim 25. Handford and Cismas do not teach wherein the decoding process is configured to decode a video signal, wherein the video signal is at least an 8K 60 FPS video signal. Luo (US 20190261010 A1) teaches wherein the decoding process is configured to decode a video signal, wherein the video signal is at least an 8K 60 FPS video signal ([0030] a large screen television 116 may receive the video sequence formatted for HEVC 4K or 8K 60 fps video). Taking the teachings of Handford, Cismas, and Luo together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the video signal with 8K 60fps of Luo into the decoding process of Handford in view of Cismas to provide higher resolution video and better frame rate such as 8K and 60fps to a particular display.

Response to Arguments

Applicant's arguments filed 02/23/2026 have been fully considered but they are not persuasive.

The applicant argues that it is the initial burden of the PTO to demonstrate a prima facie case of obviousness. If the PTO does not set forth a prima facie case of obviousness, the Applicant is under no obligation to submit evidence of non-obviousness. MPEP 2142 (emphasis added). The pending claims require, inter alia, using a frame buffer to store data representative of a first frame, where that data is a set of transformed elements indicative of spatial correlation, the transformed elements being losslessly compressed, stored in memory external to a dedicated hardware circuit, and later retrieved when processing a subsequent frame.
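The claim 43 rejection turns on the generic pipeline "run length encoding followed by Huffman encoding." A minimal self-contained sketch of that pipeline (this is textbook RLE plus Huffman for illustration, not the applicant's or Lim's actual implementation; all function names are hypothetical):

```python
import heapq
from collections import Counter
from itertools import groupby

def rle(symbols):
    # Run-length encoding: collapse runs into (symbol, run_length) pairs.
    return [(s, len(list(g))) for s, g in groupby(symbols)]

def huffman_code(items):
    # Classic Huffman construction over the RLE pairs: repeatedly merge
    # the two least frequent subtrees, prefixing their codes with 0/1.
    freq = Counter(items)
    if len(freq) == 1:                       # degenerate one-symbol case
        return {next(iter(freq)): "0"}
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, i2, merged])
    return heap[0][2]

def encode(symbols):
    # Run length encoding followed by Huffman encoding (claim 43's order).
    pairs = rle(symbols)
    code = huffman_code(pairs)
    return "".join(code[p] for p in pairs), code

def decode(bits, code):
    # Inverse: parse the prefix-free bit string, then expand the runs.
    inv = {v: k for k, v in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inv:
            s, n = inv[buf]
            out.extend([s] * n)
            buf = ""
    return out
```

The round trip is exact by construction, which is the property ("lossless") the parties dispute over Handford's H.264-based disclosure.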
Applicant submits that the Office Action has not demonstrated a prima facie case of obviousness because the cited references, individually or in combination, do not teach or suggest this claimed architecture or workflow.

In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007).

In this case, Handford teaches using a frame buffer to store data representative of a first frame data ([0078] and [0083] a buffer, and 424 of fig. 4B for spatial correlation elements (t0) are considered a first frame data), where that data is a set of transformed elements indicative of spatial correlation (424 of fig. 4B for the spatial correlation element (t0) that is treated as a first frame data, and 418 of fig. 4B for the spatial correlation elements (t1) that is treated as a second frame data; 424 and 418 of fig. 4B and [0048] teach a set of transformed elements indicative of spatial correlation; [0036] the set of transformed elements as the set of row and columns elements of an image or frame from a sequence of images or frames making up the video signal), the transformed elements being losslessly compressed ([0039]-[0040] H.264 encoding and H.264 decoding would obviously include a lossless compression that is well known in the art. To support the well-known, see McGowan et al. (US 20100118958 A1), [0019] In the H.264 standard encoder, for example, the Context-Adaptive Variable Length Coding (CAVLC) entropy coder (which is also familiar to those skilled in the art) is used as a lossless compression method well suited for block-based video coding. In a typical case, the error is a quantized difference between the discrete cosine transform (DCT) coefficients of the predicted pixels and the DCT coefficients of the actual pixels. (The use of DCT coefficients in video coding is also fully familiar to those of ordinary skill in the art.) In general, the encoding is more efficient if there are many differences which are equal to zero, but it is still highly efficient if the (absolute value of the) difference of select non-zero terms is equal to 1. An occasional absolute difference greater than 1 in these select terms breaks the efficiency of the entropy coder and requires a disproportionately large number of bits to encode).

Handford suggests combination of any features in the embodiments, equivalents and modifications that would be made ([0033] and [0171] modifications of the video encoder and decoder for the savings may be particularly relevant where the signal data corresponds to high quality video data, where the amount of information transmitted in known systems can be especially high).

Cismas teaches the compressed video stored in memory external to a dedicated hardware circuit (60 of fig. 2-A and 168 of fig. 2-B; Col. 4, lines 19-20 and Col. 11, lines 61-64, the frame storage units 60, 168 may be provided off-die, in external memory), and later retrieved when processing a subsequent frame (68 of fig. 2-A and 168 of fig. 2-B storing the compressed video to be retrieved for the video decoder to perform a decoding process). Cismas suggests the video compression using a lossless encoder and decoder (Col. 4, lines 8-33) and the disclosed embodiments would be altered in many ways for video compression and decompression (Col. 12, lines 13-16).
In view of the suggested teachings of Handford and Cismas above, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Handford and Cismas to provide an improvement of the encoder and decoder.

The applicant further argues that Handford does not disclose lossless compression of those elements. To the contrary, Handford repeatedly teaches quantization, rate-distortion analysis, and selection between different correlation representations, all of which are inherently lossy coding techniques. Nowhere does Handford describe compressing correlation elements in a manner that preserves exact values for later inverse reconstruction, nor does the reference describe performing an inverse lossless decompression operation. The absence of any disclosure of run-length encoding, Huffman encoding, or any other expressly lossless compression mechanism is dispositive. Treating Handford's lossy coding operations as "lossless compression" is an unsupported reinterpretation of the reference and does not satisfy the claim limitation.

The examiner strongly disagrees with the applicant. It is submitted that Handford teaches the video compression using H.264 encoding and H.264 decoding ([0039]-[0040] H.264 encoding and H.264 decoding would obviously include a lossless compression that is well known in the art. To support the well-known, see McGowan et al. (US 20100118958 A1), [0019] In the H.264 standard encoder, for example, the Context-Adaptive Variable Length Coding (CAVLC) entropy coder (which is also familiar to those skilled in the art) is used as a lossless compression method well suited for block-based video coding. In a typical case, the error is a quantized difference between the discrete cosine transform (DCT) coefficients of the predicted pixels and the DCT coefficients of the actual pixels. (The use of DCT coefficients in video coding is also fully familiar to those of ordinary skill in the art.)
In general, the encoding is more efficient if there are many differences which are equal to zero, but it is still highly efficient if the (absolute value of the) difference of select non-zero terms is equal to 1. An occasional absolute difference greater than 1 in these select terms breaks the efficiency of the entropy coder and requires a disproportionately large number of bits to encode).

The applicant further argues that Handford also fails to disclose the claimed frame buffer. While Handford may use internal buffers as part of a coding pipeline, those buffers store intermediate data for immediate processing and are not described as a frame buffer that stores data representative of a first frame for reuse when processing a second frame. The claims require a temporal persistence model in which compressed transformed elements from frame n are stored and later retrieved to assist decoding of frame n+1. Handford's buffers do not perform this function and are not described as frame-level storage structures at all. The Office Action's reliance on generic "buffers" conflates transient working memory with the specifically claimed frame buffer architecture.

The examiner strongly disagrees with the applicant. It is submitted that Handford teaches using a frame buffer ([0078] a buffer is treated as a frame buffer for storing a frame data) to store data representative of a first frame data ([0078] a second set of spatial correlation elements, 424 of fig. 4A, [0036] the second set of spatial correlation elements comprises one or more rows and columns of signal elements and are all or part of an image or frame from a sequence of images or frames making up the video signal), wherein the data representative of the first frame data is used when processing a second frame data (426 of fig. 4). Handford discloses a buffer that has the function of storing the frame data.
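The disputed limitation is a frame buffer holding losslessly compressed transformed elements from frame n, in memory external to the decoder circuit, for exact retrieval when decoding frame n+1. A toy software model of that round trip can make the "temporal persistence" point concrete (zlib stands in for the claimed lossless technique and a dict stands in for off-chip memory; both are assumptions for illustration, not the claimed hardware design):

```python
import zlib

class ExternalFrameBuffer:
    """Toy model of the claimed workflow: compressed transformed elements
    from frame n are parked in (notionally external) memory and recovered
    bit-exactly when processing frame n+1."""

    def __init__(self):
        self._store = {}  # stands in for external DRAM

    def put(self, frame_index, elements):
        # Lossless compression before the write over the memory link.
        self._store[frame_index] = zlib.compress(elements)

    def get(self, frame_index):
        # Inverse lossless step on retrieval (cf. claim 26's limitation).
        return zlib.decompress(self._store[frame_index])
```

Decoding frame 1 would call `get(0)` and receive frame 0's elements unchanged; that bit-exactness is precisely what distinguishes the claimed scheme from the quantization the applicant attributes to Handford.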
The applicant further argues although Cismas discloses the use of external memory, that disclosure is limited to reorder buffers for video block data. Cismas does not disclose storing transformed spatial-correlation elements, does not disclose losslessly compressing such elements, and does not disclose storing them as data representative of a frame for later retrieval during processing of a subsequent frame. The external memory in Cismas serves a fundamentally different purpose, namely managing block ordering and memory utilization and does not remedy the deficiencies of Handford with respect to either lossless compression or frame-level reuse of correlation data. The examiner strongly disagrees with the applicant. It is submitted that Cismas the use of external memory (68 of fig. 2-A and 168 of fig. 2-B, Col. 4, lines 8-33) for storing compressed video frames, wherein the video frame is compressed by the lossless compression (Col. 4, lines 8-33, a lossless 2D encoder unit) to provide transformed spatial-correlation elements (Col. 8, lines 65-67, some correlation is applied; Col. 11, line 65-Col. 12, line 1). Cismas does disclose losslessly compressing such elements (Col. 7, lines 8-33, a lossless 2D decoder unit and a lossless 2D encoder unit to perform video coding according to one or more video coding standards such as MPEG-2, H.264 (MPEG 4 AVC), and/or H.265 (HEVC, or High Efficiency Video Coding), and does disclose storing them as data representative of a frame for later retrieval during processing of a subsequent frame (68 of fig. 2-A and 168 of fig. 2-B). The applicant further argues that the Office Action's combination of Handford and Cismas therefore relies on hindsight. 
The rejection assumes that a person of ordinary skill would extract correlation elements from Handford, apply lossless compression not disclosed in Handford, store the compressed elements in Cismas's external reorder buffer despite its different function, and then retrieve those elements for cross-frame decoding without any teaching or suggestion in the cited references to do so. This sequence mirrors Applicant's claimed invention rather than flowing from the references themselves.

The examiner strongly disagrees with the applicant. It is submitted that Handford and Cismas both teach the video compression using lossless compression ([0039]-[0040] of Handford, using H.264 for encoding and decoding that does not exclude the lossless compression; Col. 4, lines 8-33 of Cismas, perform video coding using lossless compression according to one or more video coding standards such as MPEG-2, H.264 (MPEG 4 AVC), and/or H.265 (HEVC, or High Efficiency Video Coding)). Handford and Cismas both teach a buffer for storing transformed spatial-correlation elements ([0036] and [0078] of Handford; see Cismas: 68 of fig. 2-A and 168 of fig. 2-B, Col. 4, lines 8-33, Col. 8, lines 65-67, some correlation is applied; Col. 11, line 65-Col. 12, line 1). Handford and Cismas both teach the modifications and motivations ([0033] and [0171] of Handford; Col. 12, lines 13-16 of Cismas). In view of the discussions above, Handford and Cismas do teach and suggest (i) lossless compression of transformed spatial-correlation elements and (ii) storing those compressed elements in a frame buffer.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUNG T VO whose telephone number is (571)272-7340. The examiner can normally be reached Monday-Friday 6:30 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

TUNG T. VO
Primary Examiner, Art Unit 2425

/TUNG T VO/
Primary Examiner, Art Unit 2425

Prosecution Timeline

Sep 27, 2024: Application Filed
Oct 21, 2025: Non-Final Rejection — §103
Feb 23, 2026: Response Filed
Mar 18, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603995
Video Coding Using Multi-resolution Reference Picture Management
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598278
SINGLE 2D DIGITAL IMAGE CAPTURE SYSTEM PROCESSING, DISPLAYING OF 3D DIGITAL IMAGE SEQUENCE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593024
HEAD-UP DISPLAY DEVICE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12593020
SINGLE 2D IMAGE CAPTURE SYSTEM, PROCESSING & DISPLAY OF 3D DIGITAL IMAGE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587624
FINAL VIEW GENERATION USING OFFSET AND/OR ANGLED SEE-THROUGH CAMERAS IN VIDEO SEE-THROUGH (VST) EXTENDED REALITY (XR)
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 86% (+15.6%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 901 resolved cases by this examiner. Grant probability derived from career allow rate.
