Prosecution Insights
Last updated: April 19, 2026
Application No. 19/029,976

MULTI-CAMERA 3D CONTENT CREATION

Non-Final OA — §103, §DP
Filed
Jan 17, 2025
Examiner
RAHAMAN, SHAHAN UR
Art Unit
2426
Tech Center
2400 — Computer Networks
Assignee
Outward Inc.
OA Round
1 (Non-Final)
76%
Grant Probability
Favorable
1-2
OA Rounds
2y 11m
To Grant
88%
With Interview

Examiner Intelligence

Grants 76% — above average
76%
Career Allow Rate
479 granted / 633 resolved
+17.7% vs TC avg
+12.6%
Interview Lift
moderate lift across resolved cases with interview
Typical timeline
2y 11m
Avg Prosecution
51 currently pending
Career history
684
Total Applications
across all art units

Statute-Specific Performance

§101
4.7%
-35.3% vs TC avg
§103
50.0%
+10.0% vs TC avg
§102
14.7%
-25.3% vs TC avg
§112
15.1%
-24.9% vs TC avg
Black line = Tech Center average estimate • Based on career data from 633 resolved cases

Office Action

§103, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The following prior art references were found to be relevant to applicant's invention during the current search:

US 2011/0074926 A1 (hereinafter Khan)
US 2014/0294361 A1 (hereinafter Acharya)
US 2001/0043738 A1 (Sawhney)
US 2003/0095711 A1 (Figs. 1, 6: multi-camera pose estimation with foreknowledge)
US 2008/0262718 A1 (Figs. 4, 5: multi-camera pose estimation with foreknowledge)
US 2010/0045701 A1 (Scott; para 49 describes how camera-based and sensor-based pose estimation are combined for higher accuracy; see also paras 46-47, 5, 22-24)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5 and 7-20 are rejected under 35 U.S.C. 103 as being unpatentable over Khan in view of Acharya.

Regarding claim 1: Khan teaches a method comprising: receiving data from a plurality of cameras configured to capture a scene [(Fig. 1; para 36)]; for each of the plurality of cameras, determining a relative pose of a given camera with respect to the scene for a frame based at least in part on sensor data associated with that camera for the frame [(para 36: "capturing devices can be equipped with one or more of a Global Positioning System (GPS) receiver, a gyroscope, accelerometer, a compass, etc., to obtain the location coordinates (latitude, longitude and altitude) and orientation information of the video capturing device. Moreover, the video capture device can determine the distance to the object being photographed with a rangefinder or from the camera zoom/focus information"; para 47: "each frame may have different orientation, time, and location information. In these situations, adding metadata to individual frames can result in a more accurate measurement of the associated information"; Fig. 11A)]; determining relative poses of cameras with respect to one or more other cameras comprising the plurality of cameras for the frame based on independently determined relative poses of individual cameras with respect to the scene for the frame [(Fig. 11B)]; and generating at least a partial three-dimensional reconstruction of the scene for the frame based on received image data and determined relative camera poses for the frame [(para 11)].

Khan does not explicitly show that the pose estimate is based on received image data. However, in the same/related field of endeavor, Acharya teaches that the pose estimate is based on received image data in addition to sensor data [(para 27, 66)]. Therefore, in light of the above discussion, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of the prior art, because such a combination would enhance the pose estimation [(Acharya para 27, 66)].

Regarding claim 2: Khan additionally teaches the method of claim 1, wherein the plurality of cameras is independently operated [(para 37)].

Regarding claim 3: Khan additionally teaches the method of claim 1, wherein no foreknowledge exists of relative poses of cameras with respect to the scene and with respect to each other [(the distance between cameras is determined from the obtained information {para 81, 37}, i.e., the distance and other pose were not known beforehand; the cameras can move freely anywhere, and locations are based on freely roamed positions/poses {para 53-63}; and relative pose is determined {para 71})].

Regarding claim 4: Khan additionally teaches the method of claim 1, wherein relative poses of cameras with respect to the scene and with respect to each other are not fixed and are time variant [(para 47, 53-63)].

Regarding claim 5: Khan additionally teaches the method of claim 1, wherein generating at least the partial three-dimensional reconstruction of the scene for the frame comprises determining correspondences between images comprising the frame captured by the plurality of cameras [(para 73-74, 65)].

Regarding claim 7: Khan additionally teaches the method of claim 1, wherein generating at least the partial three-dimensional reconstruction of the scene for the frame comprises rectifying images comprising the frame captured by the plurality of cameras [(para 44, compress/decompress)].

Regarding claim 8: Khan additionally teaches the method of claim 1, wherein generating at least the partial three-dimensional reconstruction of the scene for the frame comprises estimating depths in images comprising the frame captured by the plurality of cameras [(para 69)].

Regarding claim 9: Khan additionally teaches the method of claim 1, wherein generating at least the partial three-dimensional reconstruction of the scene for the frame comprises generating at least a corresponding portion of a point cloud for the frame [(para 59)].

Regarding claim 10: Acharya additionally teaches the method of claim 1, wherein determining relative pose of a given camera with respect to the scene for the frame comprises determining a first estimate of camera pose with respect to the scene based on image data received from the given camera and a second estimate of camera pose with respect to the scene based on sensor data received from the given camera [(Acharya para 27, 66)].

Regarding claim 11: Acharya additionally teaches the method of claim 10, wherein the second estimate is employed to verify and provide a parallel estimate to the first estimate [(in Acharya, line-of-sight/pose is determined from the image data using SfM {para 5-6}; the "gyroscope-derived rotational matrix may be combined with that of an SfM-estimated camera pose," and "due to inherent noise in the gyroscope sensor, SfM alignment may be periodically rerun, resetting this noise, and maintaining a bounded inaccuracy (relative to the last frame-to-model alignment). Moreover, since SfM alignment itself is prone to some error, this input from gyroscope can be used to inform a hysteresis across multiple alignment attempts" {para 66})].

Regarding claim 12: Acharya additionally teaches the method of claim 10, wherein the second estimate is employed to fill gaps in pose estimation when pose cannot be determined from the first estimate [(Acharya para 68-69)].

Regarding claim 13: Acharya additionally teaches the method of claim 1, wherein determined relative pose of a given camera with respect to the scene is with respect to features or fiducials of the scene [(Acharya para 47, 62)].

Regarding claim 14: Khan additionally teaches the method of claim 1, wherein data is received, relative poses are determined, and at least the partial three-dimensional reconstruction of the scene is generated for each of a plurality of time slices or frames [(Fig. 9)].

Regarding claim 15: Khan additionally teaches the method of claim 1, wherein received image data comprises frames of a video recording of the scene [(para 13)].

Regarding claim 16: Khan in view of Acharya additionally teaches the method of claim 1, wherein generating at least the partial three-dimensional reconstruction of the scene for the frame is based on synchronizing received data from the plurality of cameras that have captured different perspectives of the scene [(Khan Fig. 9; Acharya para 42)].

Regarding claim 17: Acharya additionally teaches the method of claim 1, wherein generating at least the partial three-dimensional reconstruction of the scene for the frame is based on correspondence between sets of features seen in common among multiple cameras [(Acharya para 61-62)].

Regarding claim 18: Acharya additionally teaches the method of claim 1, wherein generating at least the partial three-dimensional reconstruction of the scene for the frame is based on establishing correspondence of features between frames of a video sequence [(Acharya para 61-62)].

Regarding claim 19: Khan in view of Acharya additionally teaches a system comprising: a processor configured to: receive data from a plurality of cameras configured to capture a scene; for each of the plurality of cameras, determine a relative pose of a given camera with respect to the scene for a frame based at least in part on image data captured by that camera and sensor data associated with that camera for the frame; determine relative poses of cameras with respect to one or more other cameras comprising the plurality of cameras for the frame based on independently determined relative poses of individual cameras with respect to the scene for the frame; and generate at least a partial three-dimensional reconstruction of the scene for the frame based on received image data and determined relative camera poses for the frame; and a memory coupled to the processor and configured to provide instructions to the processor [(see the analysis of claim 1; Khan para 42; Acharya para 104-105)].

Regarding claim 20: Khan in view of Acharya additionally teaches a computer program product embodied in a non-transitory computer readable medium and comprising computer instructions for: receiving data from a plurality of cameras configured to capture a scene; for each of the plurality of cameras, determining a relative pose of a given camera with respect to the scene for a frame based at least in part on image data captured by that camera and sensor data associated with that camera for the frame; determining relative poses of cameras with respect to one or more other cameras comprising the plurality of cameras for the frame based on independently determined relative poses of individual cameras with respect to the scene for the frame; and generating at least a partial three-dimensional reconstruction of the scene for the frame based on received image data and determined relative camera poses for the frame [(see the analysis of claim 1; Khan para 42; Acharya para 104-105)].

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Khan in view of Acharya, further in view of Sawhney.

Regarding claim 6: Khan in view of Acharya does not explicitly show wherein generating at least the partial three-dimensional reconstruction of the scene for the frame comprises facilitating registration of images comprising the frame captured by the plurality of cameras by feature correspondence between nearest neighbor cameras. However, in the same/related field of endeavor, Sawhney teaches this limitation [(para 8, 44, and 32)]. Therefore, in light of the above discussion, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of the prior art to improve pose estimation.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground, provided the reference application or patent either is shown to be commonly owned with this application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO internet Web site contains terminal disclaimer forms which may be used; please visit http://www.uspto.gov/forms/. The filing date of the application will determine which form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12368833. Although the claims at issue are not identical, they are not patentably distinct from each other: instant claim 1 is obvious over the limitations of patented claims 1, 2, and 15, and the other instant claims are obvious permutations and variations of patented claims 1-20.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 11212510. Although the claims at issue are not identical, they are not patentably distinct from each other: instant claim 1 is a subset of patented claim 1 with an obvious variation (method vs. system), and the other instant claims are obvious permutations and variations of patented claims 1-18.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-24 of U.S. Patent No. 10791319. Although the claims at issue are not identical, they are not patentably distinct from each other: instant claim 1 is a subset of patented claim 1 with an obvious variation (method vs. system), and the other instant claims are obvious permutations and variations of patented claims 1-24.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Shahan Rahaman, whose telephone number is (571) 270-1438. The examiner can normally be reached from 7am to 3:30pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nasser Goodarzi, can be reached at (571) 272-4195. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center; status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/SHAHAN UR RAHAMAN/
Primary Examiner, Art Unit 2426
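The sensor/image fusion the rejection cites from Acharya (para 66, gyroscope-derived pose periodically re-aligned by SfM) can be sketched in simplified form. This is an illustrative assumption, not code or an algorithm from either reference: it reduces pose to a single heading angle and uses a basic blend toward the image-based estimate, with all names and numbers hypothetical.

```python
# Illustrative sketch only (not from Khan or Acharya): dead-reckon a heading
# from a drifting gyroscope, then periodically blend in an absolute heading
# from an image-based (SfM-style) estimate to bound the accumulated drift.

def fuse_heading(gyro_rates, dt, image_fixes, blend=0.5):
    """gyro_rates: per-step angular rates (deg/s); dt: timestep (s);
    image_fixes: {step index: absolute image-based heading estimate (deg)}."""
    heading = 0.0
    track = []
    for step, rate in enumerate(gyro_rates):
        heading += rate * dt                      # integrate the gyro reading
        if step in image_fixes:                   # periodic SfM-style re-alignment
            heading = blend * heading + (1 - blend) * image_fixes[step]
        track.append(heading)
    return track

# Hypothetical scenario: true motion is 10 deg/s for 10 s (final heading 100),
# but the gyro reads a biased 10.5 deg/s.
dt = 0.1
rates = [10.5] * 100
fixes = {step: 10.0 * dt * (step + 1) for step in range(9, 100, 10)}  # one fix per second

fused = fuse_heading(rates, dt, fixes)
gyro_only = sum(r * dt for r in rates)            # 105.0: 5 degrees of drift
print(abs(fused[-1] - 100.0) < abs(gyro_only - 100.0))  # → True: fusion bounds drift
```

A real system would use full rotation matrices or quaternions and weight the blend by each estimate's uncertainty; the point here is only the structure of the combination the examiner relies on.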

Prosecution Timeline

Jan 17, 2025
Application Filed
Apr 01, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599294
IMAGE-RECORDING DEVICE FOR IMPROVED LOW LIGHT INTENSITY IMAGING AND ASSOCIATED IMAGE-RECORDING METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12602765
DEFECT INSPECTION SYSTEM AND DEFECT INSPECTION METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12598328
VIDEO SIGNAL PROCESSING METHOD AND DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12593035
IMAGE ENCODING/DECODING METHOD AND DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12586224
THREE-DIMENSIONAL SCANNING SYSTEM AND METHOD FOR OPERATING SAME
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
88%
With Interview (+12.6%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 633 resolved cases by this examiner. Grant probability derived from career allow rate.
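For readers checking the arithmetic, the projections above are consistent with treating the interview lift as additive percentage points on top of the career allow rate. A minimal sketch of that reading (the additive-points model and the rounding rule are assumptions, not a documented formula):

```python
# Figures taken from the examiner stats shown above; the combination model
# (allow rate + additive lift, rounded) is an assumption for illustration.
granted, resolved = 479, 633
base = 100 * granted / resolved       # career allow rate, in percent
lift_points = 12.6                    # reported interview lift, percentage points

print(round(base), round(base + lift_points))  # → 76 88
```

Both rounded values match the displayed 76% grant probability and 88% with-interview figure.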
