Prosecution Insights
Last updated: April 19, 2026
Application No. 18/773,454

Multi-Baseline Camera Array System Architectures for Depth Augmentation in VR/AR Applications

Status: Final Rejection (§103)
Filed: Jul 15, 2024
Examiner: LIMA, FABIO S
Art Unit: 2486
Tech Center: 2400 — Computer Networks
Assignee: Adeia Imaging LLC
OA Round: 2 (Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 1m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 77%, above average (319 granted / 415 resolved; +18.9% vs TC avg)
Interview Lift: +14.8%, moderate (based on resolved cases with interview)
Avg Prosecution: 2y 1m, fast prosecutor (32 currently pending)
Total Applications: 447 across all art units (career history)

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§112: 19.7% (-20.3% vs TC avg)

Based on career data from 415 resolved cases. (Chart baseline: Tech Center average estimate.)
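Each statute's delta is stated relative to the Tech Center average, so the implied baseline can be recovered by simple arithmetic from the figures above. A minimal sketch (the dictionary layout is purely illustrative):

```python
# Recover the implied Tech Center baseline for each statute from the
# examiner's rate and its stated delta vs the TC average (table above).
rates = {
    "§101": (2.7, -37.3),
    "§103": (45.8, +5.8),
    "§102": (19.1, -20.9),
    "§112": (19.7, -20.3),
}
for statute, (rate, delta) in rates.items():
    baseline = round(rate - delta, 1)  # examiner rate minus delta = TC avg
    print(f"{statute}: examiner {rate}% vs TC avg ~{baseline}%")
```

Notably, all four rows imply the same ~40% baseline, which may mean the chart's "Tech Center average estimate" is a single common figure rather than a per-statute average.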

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Terminal Disclaimer

The terminal disclaimer filed on December 23, 2025 disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of Patent Number 11,368,662 has been reviewed and is accepted. The terminal disclaimer has been recorded.

Response to Arguments

The objection to claims 10-12 has been withdrawn. The rejection of claims 10-12 under 35 U.S.C. 112(b) has been withdrawn. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 12, 17-23, 27 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al. (US20150178939A1), hereinafter referred to as Bradski, in view of McCulloch et al. (US20130286004A1), hereinafter referred to as McCulloch.
Regarding claim 1, Bradski discloses an immersive headset, comprising (¶¶ [0043]-[0044]): a display configured to render immersive content selected from the group consisting of virtual reality content, mixed reality content, and augmented reality content (¶¶ [0043]-[0045]); a primary system of cameras (¶ [0323]), wherein the primary system of cameras comprises: a first plurality of cameras, configured to capture a first set of image data (¶¶ [0323] and [0327]); and a second plurality of cameras, configured to capture a second set of image data (¶¶ [0323] and [0327]); and derive a set of depth information (¶ [0327]).

Bradski does not explicitly disclose: located in a right area of the immersive headset corresponding to a field of view region and located in a left area of the immersive headset corresponding to the field of view region; a memory, storing image processing instructions; and at least one processor configured to execute the image processing instructions; corresponding to the field of view region based on the first set of image data and the second set of image data; and render the immersive content based on the set of depth information, wherein the set of depth information is used to determine where to render an individual virtual object on at least one image of the immersive content.

However, McCulloch, from the same or similar endeavor of image systems, discloses: located in a right area of the immersive headset corresponding to a field of view region and located in a left area of the immersive headset corresponding to the field of view region (¶¶ [0037], [0038], [0047]-[0050] and [0061]); at least one processor configured to execute the image processing instructions (¶ [0051]); corresponding to the field of view region based on the first set of image data and the second set of image data (¶¶ [0048]-[0050]); and render the immersive content based on the set of depth information, wherein the set of depth information is used to determine where to render an individual virtual object on at least one image of the immersive content (¶¶ [0004] and [0031]-[0034]).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings disclosed by Bradski to add the teachings of McCulloch as above, in order to capture video and still images, typically in color, of the real world to map real objects in the display field of view of the see-through display, and hence, in the field of view of the user (McCulloch, [0047]).

Regarding claim 12, Bradski and McCulloch disclose all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim. Bradski does not explicitly disclose the immersive headset of claim 1, wherein the individual virtual object is rendered such that the individual virtual object is appropriately occluded by a real world object visible through the display. However, McCulloch, from the same or similar endeavor of image systems, discloses the immersive headset of claim 1, wherein the individual virtual object is rendered such that the individual virtual object is appropriately occluded by a real world object visible through the display (¶¶ [0060] and [0096]).
The motivation for combining Bradski and McCulloch has been discussed in connection with claim 1, above.

Regarding claim 17, Bradski and McCulloch disclose all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim. Furthermore, Bradski discloses the immersive headset of claim 1, further comprising at least one additional sensor (¶¶ [0302], [0320] and [0324]).

Regarding claim 18, Bradski and McCulloch disclose all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim. Furthermore, Bradski discloses the immersive headset of claim 17, wherein the at least one additional sensor comprises a secondary system of one or more cameras, positioned towards an additional region, and configured to capture a secondary set of image data (¶ [0323]).

Regarding claim 19, Bradski and McCulloch disclose all the limitations of claim 18, which are analyzed as previously discussed with respect to that claim. Furthermore, Bradski discloses the immersive headset of claim 18, further comprising at least one illumination light source configured to project infrared light, wherein the secondary system of one or more cameras is sensitive to infrared wavelengths (¶ [0323]). Bradski does not explicitly disclose being configured to capture the secondary set of image data based, at least in part, on the infrared light projected over the additional region. However, McCulloch, from the same or similar endeavor of image systems, discloses being configured to capture the secondary set of image data based, at least in part, on the infrared light projected over the additional region (¶¶ [0052] and [0058]). The motivation for combining Bradski and McCulloch has been discussed in connection with claim 1, above.

Regarding claim 20, Bradski and McCulloch disclose all the limitations of claim 18, which are analyzed as previously discussed with respect to that claim. Furthermore, Bradski discloses the immersive headset of claim 18, wherein the at least one processor is further configured to execute the image processing instructions to determine a set of feature data corresponding to the additional region by performing feature tracking based on the secondary set of image data (¶¶ [0320]-[0323]). The motivation for combining Bradski and McCulloch has been discussed in connection with claim 1, above.

Regarding claim 21, Bradski and McCulloch disclose all the limitations of claim 20, which are analyzed as previously discussed with respect to that claim. Furthermore, Bradski discloses the immersive headset of claim 20, wherein rendering the immersive content is further based on the set of feature data (¶¶ [0320]-[0323]).

Regarding claim 22, Bradski and McCulloch disclose all the limitations of claim 18, which are analyzed as previously discussed with respect to that claim. Furthermore, Bradski discloses the immersive headset of claim 18, wherein the secondary system of cameras comprises two cameras, each of which corresponds to a singular eye of a wearer of the immersive headset (¶¶ [0320]-[0323]).

Regarding claim 23, Bradski and McCulloch disclose all the limitations of claim 18, which are analyzed as previously discussed with respect to that claim. Furthermore, Bradski discloses the immersive headset of claim 18, wherein the secondary system of cameras is located in a central area of the immersive headset, relative to the first plurality of cameras and the second plurality of cameras (¶¶ [0320]-[0323]).

Regarding claim 27, Bradski and McCulloch disclose all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim. Bradski does not explicitly disclose the immersive headset of claim 1, wherein the set of depth information is derived by observing parallax in at least one of the first set of image data or the second set of image data.
However, McCulloch, from the same or similar endeavor of image systems, discloses the immersive headset of claim 1, wherein the set of depth information is derived by observing parallax in at least one of the first set of image data or the second set of image data (¶ [0049]). The motivation for combining Bradski and McCulloch has been discussed in connection with claim 1, above.

Regarding claim 28, Bradski and McCulloch disclose all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim. Bradski does not explicitly disclose the immersive headset of claim 1, wherein at least one of the first set of image data or the second set of image data comprises a depth map. However, McCulloch, from the same or similar endeavor of image systems, discloses the immersive headset of claim 1, wherein at least one of the first set of image data or the second set of image data comprises a depth map (¶ [0048]). The motivation for combining Bradski and McCulloch has been discussed in connection with claim 1, above.

Claims 3, 7 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski, in view of McCulloch, and further in view of Pomerantz (US20140160250A1), hereinafter referred to as Pomerantz.

Regarding claim 3, Bradski and McCulloch disclose all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim. Furthermore, Bradski discloses the immersive headset of claim 1, wherein the at least one processor is further configured to execute the image processing instructions to modify brightness of the immersive content on the display (¶¶ [0098] and [0213]). Bradski does not explicitly disclose doing so in response to observed changes in a composite set of image data comprising the first set of image data and the second set of image data.

However, Pomerantz, from the same or similar endeavor of image systems, discloses doing so in response to observed changes in a composite set of image data comprising the first set of image data and the second set of image data (¶¶ [0252] and [0744]-[0746]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings disclosed by Bradski and McCulloch to add the teachings of Pomerantz as above, in order to determine whether the video data may include a hand gesture instruction by comparing a time-varying pattern of brightness of the video data to a pattern of brightness that is characteristic of at least a portion of a user's hand being passed across a field of view of the camera, and processing the hand gesture instruction in response to detecting the hand gesture instruction (Pomerantz, [0398]).

Regarding claim 7, Bradski and McCulloch disclose all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim. Bradski does not explicitly disclose the immersive headset of claim 1, wherein the primary system of cameras corresponds to a set of different resolutions. However, Pomerantz, from the same or similar endeavor of image systems, discloses the immersive headset of claim 1, wherein the primary system of cameras corresponds to a set of different resolutions (¶ [0188]). The motivation for combining Bradski, McCulloch and Pomerantz has been discussed in connection with claim 3, above.

Regarding claim 29, Bradski and McCulloch disclose all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim.
Bradski does not explicitly disclose the immersive headset of claim 1, wherein the first plurality of cameras is aligned along a first vertical axis within the right area of the immersive headset; and wherein the second plurality of cameras is aligned along a second vertical axis within the left area of the immersive headset. However, Pomerantz, from the same or similar endeavor of image systems, discloses the immersive headset of claim 1, wherein the first plurality of cameras is aligned along a first vertical axis within the right area of the immersive headset; and wherein the second plurality of cameras is aligned along a second vertical axis within the left area of the immersive headset (¶¶ [0182], [0183] and [0245]-[0246]). The motivation for combining Bradski, McCulloch and Pomerantz has been discussed in connection with claim 3, above.

Claims 4, 14, and 24-26 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski, in view of McCulloch, and further in view of Sutherland (US 20160025982 A1), hereinafter referred to as Sutherland.

Regarding claim 4, Bradski and McCulloch disclose all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim. Bradski does not explicitly disclose the immersive headset of claim 1, wherein the at least one processor is further configured to execute the image processing instructions to utilize the set of depth information to perform pose estimation for the immersive headset.

However, Sutherland, from the same or similar endeavor of image systems, discloses the immersive headset of claim 1, wherein the at least one processor is further configured to execute the image processing instructions to utilize the set of depth information to perform pose estimation for the immersive headset (¶¶ [0026] and [0028]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings disclosed by Bradski and McCulloch to add the teachings of Sutherland as above, in order to provide a dense surface prediction to which the depth map is aligned (Sutherland, [0028]).

Regarding claim 14, Bradski, McCulloch and Sutherland disclose all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim. Bradski does not explicitly disclose the immersive headset of claim 1, wherein a reference camera and the display are part of a removable component that is mounted within the immersive headset. However, Sutherland, from the same or similar endeavor of image systems, discloses the immersive headset of claim 1, wherein a reference camera and the display are part of a removable component that is mounted within the immersive headset (¶¶ [0051]-[0053]). The motivation for combining Bradski, McCulloch and Sutherland has been discussed in connection with claim 4, above.

Regarding claim 24, Bradski, McCulloch and Sutherland disclose all the limitations of claim 17, which are analyzed as previously discussed with respect to that claim. Bradski does not explicitly disclose the immersive headset of claim 17, wherein the at least one additional sensor comprises a video camera; and the immersive content comprises background video obtained by the video camera.
However, Sutherland, from the same or similar endeavor of image systems, discloses the immersive headset of claim 17, wherein the at least one additional sensor comprises a video camera; and the immersive content comprises background video obtained by the video camera (¶ [0041]). The motivation for combining Bradski, McCulloch and Sutherland has been discussed in connection with claim 4, above.

Regarding claim 25, Bradski, McCulloch and Sutherland disclose all the limitations of claim 17, which are analyzed as previously discussed with respect to that claim. Bradski does not explicitly disclose the immersive headset of claim 1, wherein deriving the set of depth information further comprises: identifying near-field and far-field portions of the field of view region; and generating depth maps for each of the near-field and far-field portions of the field of view region. However, Sutherland, from the same or similar endeavor of image systems, discloses the immersive headset of claim 1, wherein deriving the set of depth information further comprises: identifying near-field and far-field portions of the field of view region; and generating depth maps for each of the near-field and far-field portions of the field of view region (¶¶ [0029]-[0032]). The motivation for combining Bradski, McCulloch and Sutherland has been discussed in connection with claim 4, above.

Regarding claim 26, Bradski, McCulloch and Sutherland disclose all the limitations of claim 17, which are analyzed as previously discussed with respect to that claim. Bradski does not explicitly disclose the immersive headset of claim 25, wherein each of the near-field and far-field portions of the field of view region are identified relative to a threshold distance. However, Sutherland, from the same or similar endeavor of image systems, discloses the immersive headset of claim 25, wherein each of the near-field and far-field portions of the field of view region are identified relative to a threshold distance (¶¶ [0070] and [0071]). The motivation for combining Bradski, McCulloch and Sutherland has been discussed in connection with claim 4, above.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Bradski, in view of McCulloch, and further in view of Ackerman (US 20140375680 A1), hereinafter referred to as Ackerman.

Regarding claim 16, Bradski and McCulloch disclose all the limitations of claim 1, which are analyzed as previously discussed with respect to that claim. Bradski does not explicitly disclose the immersive headset of claim 1, wherein the individual virtual object comprises an indicator identifying a real world object. However, Ackerman, from the same or similar endeavor of image systems, discloses the immersive headset of claim 1, wherein the individual virtual object comprises an indicator identifying a real world object (¶ [0034]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings disclosed by Bradski and McCulloch to add the teachings of Ackerman as above, in order to include a text description to the virtual object associated with a real-world object (Ackerman, [0034]).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FABIO S LIMA, whose telephone number is (571) 270-0625. The examiner can normally be reached Monday - Friday, 8 am - 4 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jamie Atala, can be reached at (571) 272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FABIO S LIMA/
Primary Examiner, Art Unit 2486

Prosecution Timeline

Jul 15, 2024
Application Filed
Jul 23, 2025
Non-Final Rejection — §103
Dec 23, 2025
Response Filed
Mar 26, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604015: METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12593038: TEMPORAL PREDICTION OF PARAMETERS IN NON-LINEAR ADAPTIVE LOOP FILTER (granted Mar 31, 2026; 2y 5m to grant)
Patent 12593045: ENTROPY CODING-BASED FEATURE ENCODING/DECODING METHOD AND DEVICE, RECORDING MEDIUM HAVING BITSTREAM STORED THEREIN, AND METHOD FOR TRANSMITTING BITSTREAM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581099: INFORMATION PROCESSING DEVICE AND METHOD (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581094: IMAGE SIGNAL ENCODING/DECODING METHOD AND DEVICE THEREFOR (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77% (92% with interview, +14.8%)
Median Time to Grant: 2y 1m
PTA Risk: Moderate
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
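The headline projections are consistent with simple arithmetic on the career data shown above. A minimal sketch; the additive percentage-point interview lift is an assumption inferred from the displayed figures, not a documented formula:

```python
# Career allow rate from the examiner's resolved cases (figures above).
granted, resolved = 319, 415
allow_rate = granted / resolved
print(f"Grant probability: {allow_rate:.0%}")   # rounds to 77%

# The with-interview figure matches an additive percentage-point lift.
with_interview = allow_rate + 0.148             # +14.8% interview lift
print(f"With interview: {with_interview:.0%}")  # rounds to 92%
```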
