Prosecution Insights
Last updated: April 19, 2026
Application No. 18/289,221

APPARATUS FOR DETERMINING VIDEO BASED ON DEPTH INFORMATION AND METHOD THEREOF

Status: Final Rejection (§103)
Filed: Nov 01, 2023
Examiner: HELCO, NICHOLAS JOHN
Art Unit: 2667
Tech Center: 2600 (Communications)
Assignee: Beyondtech Inc.
OA Round: 2 (Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (26 granted / 36 resolved; +10.2% vs TC avg, above average)
Interview Lift: +44.4% for resolved cases with interview (strong)
Typical Timeline: 3y 1m average prosecution; 24 applications currently pending
Career History: 60 total applications across all art units
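The headline figures above follow directly from the stated counts. A minimal Python sketch of that arithmetic (note: the with/without-interview split below is hypothetical, since the page reports only the aggregate +44.4% lift):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Share of an examiner's resolved cases that ended in a grant."""
    if resolved == 0:
        raise ValueError("no resolved cases")
    return granted / resolved

# Figures stated above: 26 grants out of 36 resolved cases -> ~72%.
career_rate = allow_rate(26, 36)
print(f"career allow rate: {career_rate:.1%}")  # 72.2%

# The "interview lift" is presumably the point difference between the
# allow rates of cases with and without an interview. The per-group
# counts here are made up purely for illustration.
lift = allow_rate(9, 10) - allow_rate(17, 26)
print(f"interview lift: {lift * 100:+.1f} points")
```

The same ratio explains why 26/36 is displayed as 72%: the tool appears to round the career rate to the nearest whole percent.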

Statute-Specific Performance

§101: 19.6% (-20.4% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 36 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicants

This action is in response to the amendments and remarks filed on 02/19/2026. Claims 1-3, 5-9, and 11 are pending.

Corrective Actions by Applicant

Claims 1-2, 5-8, and 11 have been amended. Claims 4 and 10 have been canceled.

Response to Arguments

The examiner has fully considered Applicant's presented arguments.

On pages 7-9 of the remarks, Applicant argues that the amended claims 1-6 avoid a 35 U.S.C. 112(f) interpretation, and also overcome the 35 U.S.C. 112(a) and 112(b) rejections of claims 1-6. This is persuasive. The 112(f) interpretation of claims 1-6 has been withdrawn. The 112(a) and 112(b) rejections of claims 1-6 have also been withdrawn.

On pages 9-15 of the remarks, Applicant argues that Hefeeda fails to disclose every element of amended independent claims 1 and 7. This is persuasive. All previous 35 U.S.C. 102 and 103 rejections have been withdrawn. However, the claim amendments necessitate new 103 rejections below.

Claim Objections

Claims 1-3, 5-6, and 11 are objected to.

Regarding claim 1, the last two lines of claim 1 read "an original database generation circuit configured to store the extracted the basic screen feature information for each frame in the original database", but should read "an original database generation circuit configured to store the extracted basic screen feature information for each frame in the original database" (the duplicated "the" should be deleted).

Regarding claims 2-3 and 5-6, these claims are objected to based on their dependence on claim 1 above.

Regarding claim 11, claim 11 currently depends on claim 10, but claim 10 has been canceled. The interview on 03/16/2026 with Applicant's representative confirmed that claim 11 was intended to now depend on claim 8. Thus, the claim should be amended to depend on claim 8, and the prior art rejections below will treat claim 11 as depending on claim 8. Appropriate correction is required.

Claim Rejections – 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5, and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Hefeeda et al. (U.K. Publ. GB-2493514-A) in view of Ninan et al. (U.S. Publ. US-2023/0215129-A1).
Regarding claim 1, Hefeeda discloses an apparatus for determining a video based on depth information (see figure 6, apparatus 600, processor 601, main memory 602), the apparatus comprising:

an original video extraction processor that receives an original multi-view video (see figure 1, step 101 and page 12 of 33, line 25 to page 13 of 33, line 1, where a reference video including 3D content is obtained; see page 13 of 33, lines 7-18, where the video can be a multi-view video), extracts a plurality of frames from the input original multi-view video, extracts basic screen feature information from a basic screen for each extracted frame (see figure 1, steps 103-105 and page 13 of 33, lines 1-6 and page 14 of 33, line 16 to page 17 of 33, line 3, where the reference multi-view video can be processed to extract depth information over several frames to generate a depth signature representing feature information depicted in the frames), and stores the extracted basic screen feature information in an original database for each frame (see figure 1, step 107, signature database 109, and page 15 of 33, lines 3-12, where the depth signature is indexed in a depth signature database 109);

a query video extraction processor that receives a query multi-view video (see figure 3, query video), extracts a plurality of frames and depth information screens from the input query multi-view video, extracts depth information screen feature information from the extracted depth information screen for each extracted frame, and stores the extracted depth information screen feature information in a query database for each frame (see figure 3, steps 301-303 and page 19 of 33, lines 1-4, where depth information is extracted from frames of the query video and a depth signature is generated as described above for the original video; the depth signature is necessarily stored in a computer memory); and

a video determination processor connected to the original database and the query database to receive the basic screen feature information and the depth information screen feature information, the video determination processor comparing the basic screen feature information of the original multi-view video with the depth information screen feature information of the query multi-view video to determine whether the query multi-view video is the same as the original multi-view video (see figure 3, step 305 and page 19 of 33, lines 4-8, where the depth signature of the query video is compared to those in the signature database 109 to determine if a match is found or not; then see figure 3, step 309 and page 19 of 33, lines 7-8, where a visual signature is generated from the query video; then see page 20 of 33, lines 15-24, where the visual signature can be based on features from a keyframe/basic screen; finally see page 20 of 33, line 25 to page 21 of 33, line 3, where the visual signature can be compared to visual signatures of reference videos to determine a match),

wherein the original video extraction processor includes:

a video frame extraction circuit configured to extract the plurality of frames from the original multi-view video (see figure 1, steps 103-105 and page 13 of 33, lines 1-6 and page 14 of 33, line 16 to page 17 of 33, line 3, where the reference multi-view video can be processed to extract depth information over several frames to generate a depth signature representing feature information depicted in the frames); and

an original database generation circuit configured to store the extracted the basic screen feature information for each frame in the original database (see figure 1, step 107, signature database 109, and page 15 of 33, lines 3-12, where the depth signature is indexed in a depth signature database 109).
Hefeeda fails to disclose a basic screen feature information extraction circuit configured to: extract from the original multi-view video, the basic screen which is a reference frame including feature information of the original multi-view video; and extract the basic screen feature information which is a geometric profile of an object located in the extracted basic screen, wherein the geometric profile of the object includes an object type, an object position, an object outline, an object shape, and an object size.

Pertaining to the same field of endeavor, Ninan discloses a basic screen feature information extraction circuit configured to: extract from the original multi-view video, the basic screen which is a reference frame including feature information of the original multi-view video (see paragraphs 0052-0053, where saliency video streams can encode feature information from a reference view/basic screen; paragraphs 0077-0079 specify that a saliency stream represents one of many reference views, and that the system extracts the image data therefrom); and extract the basic screen feature information which is a geometric profile of an object located in the extracted basic screen (see paragraphs 0067 and 0115), wherein the geometric profile of the object includes an object type (see paragraph 0062, where types and importance of objects in the saliency region can be ranked), an object position (see paragraphs 0067 and 0115, where positions or coordinates of the objects in the saliency region can be extracted), an object outline and an object shape (see paragraphs 0092 and 0115, where the shape of the saliency region can be extracted, and optionally classified into predetermined shapes with known outlines, such as rectangles or other regular shapes), and an object size (see paragraphs 0067 and 0115, where sizes and scaling factors of the saliency region can also be extracted).
Hefeeda and Ninan are considered analogous art, as they are both directed to processing and storage of multi-view video data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Ninan into Hefeeda because the saliency region representation reduces file size while improving comparability (see Ninan paragraphs 0053 and 0082-0084).

Regarding claim 5, Hefeeda in view of Ninan discloses wherein the query video extraction processor includes: a video frame extraction circuit that receives the query multi-view video and extracts the plurality of frames from the input query multi-view video; a depth information screen extraction circuit that extracts the depth information screens in which depth information is stored from the input query multi-view video; and a query database generation circuit that extracts the depth information screen feature information from the extracted depth information screen for each frame and stores the depth information screen feature information for each frame in the query database (see Hefeeda figure 3, steps 301-303 and page 19 of 33, lines 1-4, where depth information is extracted from frames of the query video and a depth signature is generated as described above for the original video; the depth signature is necessarily stored in a computer memory).

Regarding claim 7, Hefeeda discloses a method for determining a video based on depth information, the method comprising (see figures 1-3). The remainder of claim 7 recites steps identical to those of claim 1. Therefore, Hefeeda in view of Ninan discloses claim 7 as applied to claim 1 above.

Claims 2 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Hefeeda et al. (U.K. Publ. GB-2493514-A) in view of Ninan et al. (U.S. Publ. US-2023/0215129-A1), and further in view of Kroon (U.S. Publ. US-2022/0394229-A1).
Regarding claim 2, Hefeeda in view of Ninan discloses wherein the query multi-view video extraction processor calculates each of the difference values between the basic screens in a plurality of multi-view videos captured by a plurality of cameras installed at different locations, and sums up the calculated difference values (see page 14 of 33, lines 1-28, where the depth information for stereo or multi-view videos can be generated by calculating the disparity across frames taken from different positions and combining said disparity information by addition).

Hefeeda in view of Ninan fails to disclose and then compresses a sum to generate the depth information screen. Pertaining to the same field of endeavor, Kroon discloses and then compresses a sum to generate the depth information screen (see paragraph 0019, where depth maps can be compressed; paragraph 0031 specifies that the depth data is obtained from disparity/difference values).

Hefeeda and Kroon are considered analogous art, as they are both directed to processing and storage of 3D video data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Kroon into Hefeeda and Ninan because efficient encoding of depth data is desirable for transmission (see Kroon paragraph 0005).

Regarding claim 8, Hefeeda in view of Ninan and Kroon discloses claim 8 as applied to claim 2 above.

Claims 3 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Hefeeda et al. (U.K. Publ. GB-2493514-A) in view of Ninan et al. (U.S. Publ. US-2023/0215129-A1), Kroon (U.S. Publ. US-2022/0394229-A1), and further in view of Lin et al. (U.S. Publ. US-2014/0241434-A1).

Regarding claim 3, Hefeeda in view of Ninan fails to disclose the limitations of claim 3.
Pertaining to the same field of endeavor, Lin discloses wherein the depth information screen has a smaller size and capacity than those of the basic information screen (see paragraphs 0004 and 0007, where the depth data of a 3D/multi-view video has a lower spatial resolution than the associated texture data).

Hefeeda and Lin are considered analogous art, as they are both directed to processing of 3D and multi-view video data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Lin into Hefeeda, Ninan, and Kroon because doing so allows for improving multi-view video coding efficiency and transmission bandwidth (see Lin paragraph 0003).

Regarding claim 9, Hefeeda in view of Ninan, Kroon, and Lin discloses claim 9 as applied to claim 3 above.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Hefeeda et al. (U.K. Publ. GB-2493514-A) in view of Ninan et al. (U.S. Publ. US-2023/0215129-A1), and further in view of Kvochko (U.S. Patent US-11368289-B1).
Regarding claim 6, Hefeeda in view of Ninan discloses wherein the video determination processor includes: a feature information input circuit that receives input of the generated original database and query database; and a similarity determination circuit that compares and determines a similarity by applying the input original database and query database to a pre-learned similarity model (see Hefeeda figure 3, step 305 and page 19 of 33, lines 4-8, where the depth signature of the query video is compared to those in the signature database 109 to determine if a match is found or not; then see figure 3, step 309 and page 19 of 33, lines 7-8, where a visual signature is generated from the query video; then see page 20 of 33, lines 15-24, where the visual signature can be based on features from a keyframe/basic screen; finally see page 20 of 33, line 25 to page 21 of 33, line 3, where the visual signature can be compared to visual signatures of reference videos to determine a match; page 10 of 33, lines 1-14 specify that a comparison engine/pre-learned similarity model is used to perform the matching).

Hefeeda in view of Ninan fails to disclose and a providing circuit that provides, through a blockchain network, whether the query multi-view video is the same as the original multi-view video as a result of the similarity determination. Pertaining to the same field of endeavor, Kvochko discloses and a providing circuit that provides, through a blockchain network, whether the query multi-view video is the same as the original multi-view video as a result of the similarity determination (see column 1, lines 42-63, where a submitted/query video can be compared to a stored/original video in a blockchain network, and the user is then notified of the video comparison results).

Hefeeda and Kvochko are considered analogous art, as they are both directed to comparison and storage of video data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Kvochko into Hefeeda and Ninan because doing so allows for storage of original videos in tamper-proof form for comparison (see Kvochko column 1, lines 42-63).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Hefeeda et al. (U.K. Publ. GB-2493514-A) in view of Ninan et al. (U.S. Publ. US-2023/0215129-A1) and Kroon (U.S. Publ. US-2022/0394229-A1), and further in view of Kvochko (U.S. Patent US-11368289-B1).

Regarding claim 11, Hefeeda in view of Ninan and Kroon discloses wherein the determining of whether the query multi-view video is the same as the original multi-view video includes: receiving input of the generated original database and query database; and comparing and determining a similarity by applying the input original database and query database to a pre-learned similarity model (see Hefeeda figure 3, step 305 and page 19 of 33, lines 4-8, where the depth signature of the query video is compared to those in the signature database 109 to determine if a match is found or not; then see figure 3, step 309 and page 19 of 33, lines 7-8, where a visual signature is generated from the query video; then see page 20 of 33, lines 15-24, where the visual signature can be based on features from a keyframe/basic screen; finally see page 20 of 33, line 25 to page 21 of 33, line 3, where the visual signature can be compared to visual signatures of reference videos to determine a match; page 10 of 33, lines 1-14 specify that a comparison engine/pre-learned similarity model is used to perform the matching).

Hefeeda in view of Ninan and Kroon fails to disclose and providing, through a blockchain network, whether the query multi-view video is the same as the original multi-view video as a result of the similarity determination.
Pertaining to the same field of endeavor, Kvochko discloses and providing, through a blockchain network, whether the query multi-view video is the same as the original multi-view video as a result of the similarity determination (see column 1, lines 42-63, where a submitted/query video can be compared to a stored/original video in a blockchain network, and the user is then notified of the video comparison results).

Hefeeda and Kvochko are considered analogous art, as they are both directed to comparison and storage of video data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Kvochko into Hefeeda, Ninan, and Kroon because doing so allows for storage of original videos in tamper-proof form for comparison (see Kvochko column 1, lines 42-63).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS JOHN HELCO, whose telephone number is (703) 756-5539. The examiner can normally be reached Monday-Friday, 9:00 AM to 5:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/NICHOLAS JOHN HELCO/
Examiner, Art Unit 2667

/MATTHEW C BELLA/
Supervisory Patent Examiner, Art Unit 2667

Prosecution Timeline

Nov 01, 2023: Application Filed
Dec 01, 2025: Non-Final Rejection (§103)
Feb 19, 2026: Response Filed
Mar 16, 2026: Examiner Interview (Telephonic)
Mar 18, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602867: METHOD FOR AUTONOMOUSLY SCANNING AND CONSTRUCTING A REPRESENTATION OF A STAND OF TREES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597092: Systems and Methods for Altering Images (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586370: VEHICLE IMAGE ANALYSIS SYSTEM FOR A PERIPHERAL CAMERA (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573018: DEFECT ANALYSIS DEVICE, DEFECT ANALYSIS METHOD, NON-TRANSITORY COMPUTER-READABLE MEDIUM, AND LEARNING DEVICE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561754: METHOD AND SYSTEM FOR PROCESSING IMAGE BASED ON WEIGHTED MULTIPLE KERNELS (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 99% (+44.4%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
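One plausible way the "With Interview" figure could be reproduced from the base rate and the interview lift is an additive model with a cap, since 72% + 44.4% would otherwise exceed 100%. This is our assumption for illustration, not the tool's documented formula:

```python
def with_interview_probability(base: float, lift: float, cap: float = 0.99) -> float:
    """Additive-lift projection, capped so the result stays a plausible probability."""
    return min(base + lift, cap)

# Page figures: 72% base grant probability and a +44.4% interview lift.
# The uncapped sum exceeds 100%, so the displayed 99% implies some cap.
print(with_interview_probability(0.72, 0.444))  # 0.99
```

With a 0.99 cap this matches the displayed 99%, but other models (e.g. a relative lift on the with-interview subset) could produce the same rounded number.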
