Prosecution Insights
Last updated: April 19, 2026
Application No. 18/306,682

SYSTEMS AND METHODS FOR DETERMINING MOTION MODELS FOR ALIGNING SCENE CONTENT CAPTURED BY DIFFERENT IMAGE SENSORS

Status: Non-Final Office Action (§103)
Filed: Apr 25, 2023
Examiner: GLOVER, CHRISTOPHER KINGSBURY
Art Unit: 2485
Tech Center: 2400 — Computer Networks
Assignee: Microsoft Technology Licensing, LLC
OA Round: 5 (Non-Final)
Grant Probability: 56% (Moderate)
OA Rounds: 5-6
To Grant: 2y 2m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 56% (grants 56% of resolved cases; 100 granted / 177 resolved; -1.5% vs TC avg)
Interview Lift: +28.3% (strong lift among resolved cases with an interview)
Typical Timeline: 2y 2m average prosecution; 15 applications currently pending
Career History: 192 total applications across all art units

Statute-Specific Performance

§101: 2.2% (-37.8% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 22.1% (-17.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 177 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/7/2026 has been entered.

Response to Amendment

The instant Amendment with arguments of 1/7/2026 has been considered, but the outstanding rejection based on Bleyer is maintained because, as mentioned previously, the amendment recites disparate features of inventive aspects but fails to fully delineate the invention to provide differentiation over the cited Bleyer reference. Namely, as amended, claim 1 requires two motion models for a same time derived with different feature sets. In the instant Amendment at page 9, it is argued that Bleyer updates motion models over time to instantiate multiple motion models at times. However, Bleyer also discloses that the motion model is a composite of motion mapping matrices, (paragraph 0134) each of which may also be considered a motion model, and different data is used to develop these different matrices, such that Bleyer at least teaches two motion models for a same time derived with different feature sets as recited. Therefore the outstanding rejections are maintained. See the claims mapping below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 2, 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Bleyer (US 2022/0028095) in view of Konolige (US 2023/0090275).
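
Before the claim-by-claim mapping, a minimal sketch may help picture the technique at issue: features matched across two images captured at the same timepoint are split into an inlier set and an outlier set, and a separate motion model is fit from each subset. The sketch below is illustrative only; it assumes unit bearing vectors as features, rotation-only motion models, and a RANSAC-style split, and every function name, threshold, and data value is hypothetical rather than drawn from the application or the cited references.

# Illustrative sketch only (hypothetical names, simplified thresholds): two
# rotation-only "motion models" for one timepoint, one fit from the inlier
# correspondences and one from the outliers.
import numpy as np

def kabsch_rotation(src, dst):
    """Least-squares rotation R with R @ src[i] ~= dst[i] (Kabsch / Wahba solution)."""
    H = src.T @ dst
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def angular_residuals_deg(R, src, dst):
    """Angle in degrees between each predicted bearing R @ src[i] and dst[i]."""
    cos = np.clip(np.sum((src @ R.T) * dst, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def ransac_rotation(src, dst, iters=200, thresh_deg=2.0, seed=0):
    """RANSAC-style fit: returns (rotation refit on inliers, boolean inlier mask)."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R = kabsch_rotation(src[idx], dst[idx])
        mask = angular_residuals_deg(R, src, dst) < thresh_deg
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return kabsch_rotation(src[best_mask], dst[best_mask]), best_mask

def axis_angle(axis, deg):
    """Rotation matrix from an axis-angle pair (Rodrigues formula)."""
    a = np.asarray(axis, float) / np.linalg.norm(axis)
    t = np.radians(deg)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

# Synthetic correspondences for one timepoint (unit bearing vectors): the first
# 50 matches follow one motion, the remaining 30 another (e.g. scene content at
# a different depth), so the outlier set supports a second model.
g = np.random.default_rng(1)
src = g.normal(size=(80, 3))
src /= np.linalg.norm(src, axis=1, keepdims=True)
dst = np.vstack([src[:50] @ axis_angle([0, 0, 1], 5).T,
                 src[50:] @ axis_angle([1, 0, 0], 20).T])

R_first, inliers = ransac_rotation(src, dst)               # model from the inlier set
R_second = kabsch_rotation(src[~inliers], dst[~inliers])   # model from the outlier set
print(inliers.sum(), "inliers;", (~inliers).sum(), "outliers")

A real pipeline would obtain the correspondences from actual descriptor matching (for example ORB descriptors with a brute-force matcher) rather than synthetic data.
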
Regarding claim 1, Bleyer discloses a system for determining motion models for aligning scene content captured by different image sensors, (Abstract, system of camera alignment via motion models) the system comprising: one or more processors; (paragraphs 0215/0216, system has processors) and one or more hardware storage devices that store instructions that are executable by the one or more processors to configure the system to: (paragraphs 0223/0224, storage coupled to processors and storing instructions for execution) access a first motion model, (paragraph 0057, alignment matrix between cameras is a motion model) the first motion model being generated in association with a timepoint (paragraph 0011, calibration performed at a timepoint) based upon a set of feature correspondences, (paragraphs 0107/0108, motion model including alignment generated by feature correspondence mapping) the set of feature correspondences comprising an inlier set and an outlier set, (paragraph 0172, the set of matching features is used to develop a further set of unprojected matching features, hence an inlier set and an outlier set derived from the inlier set) wherein the inlier set is used to determine model parameters for the first motion model, (paragraph 0108, overall feature matches used to determine the first motion model) wherein the set of feature correspondences is determined by performing descriptor matching on features extracted from (i) a first image associated with the timepoint and captured by a first image sensor and (ii) a second image associated with the timepoint and captured by a second image sensor, (shown Figure 4, paragraph 0105, feature matching between first and second images; per paragraph 0011, will be at the calibration timepoint) ... wherein the inlier set and the outlier set comprise different subsets of feature correspondences from the set of feature correspondences such that no feature correspondence from the set of feature correspondences is included in both the inlier set and the outlier set; (paragraph 0172, two different feature sets, projected and unprojected, and each may be exclusively mapped to one of the inlier/outlier sets; see Figure 5, paragraph 0115) define a modified set of feature correspondences, the modified set of feature correspondences comprising the outlier set from the set of feature correspondences; (paragraph 0172, the set of matching features is used to develop a further set of unprojected matching features, hence an inlier set and an outlier set derived from the inlier set) and generate a second motion model for the timepoint (paragraph 0134, multiple motion models compiled for same timepoint) by using the modified set of feature correspondences to determine model parameters for the second motion model. (paragraph 0172, second motion model derived from determining respective correspondences between unprojected subset of features)

While Bleyer discloses both cameras capturing an object within a time frame, (paragraphs 0184/0190) Bleyer fails to identically disclose wherein the first image and the second image are captured in a temporally synchronized manner in association with the timepoint. However, Konolige teaches wherein the first image and the second image are captured in a temporally synchronized manner in association with the timepoint. (paragraph 0033, imagers for stereo imaging actuated at same time synchronously to capture object) It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to initially capture images of an object at the same time for disparity calculation using stereo images, because such was a common technique for stereo depth sensing by disparity interpolation before the effective filing date, as would be understood by one of skill in the art. (Konolige, paragraph 0002)

Regarding claim 2, Bleyer discloses wherein the first motion model and the second motion model comprise 3D rotation models. (paragraph 0056, alignment matrix is a 3D rotational matrix)

Regarding claim 12, Bleyer discloses a system for determining motion models for aligning scene content captured by different image sensors, (Abstract, system of camera alignment via motion models) the system comprising: one or more processors; (paragraphs 0215/0216, system has processors) and one or more hardware storage devices that store instructions that are executable by the one or more processors to configure the system to: (paragraphs 0223/0224, storage coupled to processors and storing instructions for execution) obtain a first image associated with a timepoint (paragraph 0011, camera match timepoint) using a first image sensor; (paragraph 0057, image obtained from main camera) determine a first set of features by performing feature extraction on the first image; (shown Figure 4, paragraphs 0104/0105, feature extraction on images) obtain a second image associated with the timepoint (paragraph 0011, matching/matrices derived for same timepoint) using a second image sensor ... ; (paragraph 0057, image obtained from second camera) determine a second set of features by performing feature extraction on the second image; (shown Figure 4, paragraphs 0104/0105, feature extraction on images) determine a set of feature correspondences by performing descriptor matching on the first set of features and the second set of features; (paragraphs 0106/0107, corresponding feature match based on descriptors) generate a first motion model for the timepoint (paragraph 0011, motion matrices calculated for an initial timepoint) by determining an inlier set from the set of feature correspondences and using the inlier set to determine model parameters for the first motion model, (paragraphs 0107/0108, motion model including alignment generated by feature correspondence mapping of the determined features) wherein the inlier set comprises a first subset of feature correspondences from the set of feature correspondences; (paragraph 0172, first set of feature correspondences) and generate a second motion model for the timepoint (paragraph 0134, at least a second motion matrix generated for the timepoint) by determining an outlier set from the set of feature correspondences and using the outlier set to determine model parameters for the second motion model, (paragraph 0172, second motion model derived from determining respective correspondences between unprojected subset of features) wherein the outlier set comprises a second subset of feature correspondences from the set of feature correspondences that is different from the inlier set such that no feature correspondence from the set of feature correspondences is included in both (i) the inlier set used to determine model parameters for the first motion model and (ii) the outlier set used to determine model parameters for the second motion model. (paragraph 0172, two different feature sets, projected and unprojected, used to generate respective motion matrices, and each may be exclusively mapped to one of the inlier/outlier sets; see Figure 5, paragraph 0115)
While Bleyer discloses both cameras capturing an object within a time frame, (paragraphs 0184/0190) Bleyer fails to identically disclose obtaining the second image in a temporally synchronized manner with the obtaining of the first image such that the first image and the second image correspond to the timepoint. However, Konolige teaches obtaining the second image in a temporally synchronized manner with the obtaining of the first image such that the first image and the second image correspond to the timepoint. (paragraph 0033, imagers for stereo imaging actuated at same time synchronously to capture object) Same rationale for combining and motivation as for claim 1 above.

Regarding claim 13, Bleyer discloses wherein the first image sensor is mounted on a head-mounted display (HMD), (paragraph 0011, main camera in HMD) and wherein the second image sensor is mounted in a user instrument for use in conjunction with the HMD. (paragraph 0006, second camera in user instrument)

Claim(s) 3-4, 7-11, 14 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bleyer in view of Konolige, in yet further view of Kobayashi (US 2022/0148198).

Regarding claim 3, Bleyer discloses access a preceding motion model, the preceding motion model being generated based upon a set of preceding feature correspondences that temporally precedes the set of feature correspondences; (paragraphs 0102/0145, motion model developed based on feature correspondences between image pairs, and continuously updated over time with updated image pairs) generate an aligned preceding motion model by modifying the preceding motion model using inertial tracking data; (paragraphs 0163/0179 in conjunction with paragraph 0125, motion models are updated over time by comparison with previous motion models, and constrained by inertial tracking data) and select a final motion model from among the first motion model and the second motion model. (paragraph 0142, two motion models are determined and selected between based on cost function) While Bleyer discloses using a Wahba cost function relative to two alignment matrices, (paragraph 0172) Bleyer may be regarded as inchoate with regard to selecting a motion model based upon (i) a comparison between the aligned preceding motion model and the first motion model and (ii) a comparison between the aligned preceding motion model and the second motion model. (paragraphs 0072-0074, multiple motion vectors calculated by different features of blocks in an image are compared to determine the reliable motion vector; per Bleyer, this may be a preceding vector) However, Kobayashi teaches selecting a motion model based upon (i) a comparison between the aligned preceding motion model and the first motion model and (ii) a comparison between the aligned preceding motion model and the second motion model. (paragraphs 0072-0074, multiple motion vectors calculated by different features of blocks in an image are compared to determine the reliable motion vector; per Bleyer, this may be a preceding vector) It would have been obvious to one of ordinary skill in the art that multiple motion vectors may be determined and a most accurate or reliable motion vector is chosen from among a set of motion vectors associated with different features before the effective filing date of the instant application, because Bleyer teaches forming at least two motion vector models and selecting between the same based on a cost function, (paragraph 0172) and Kobayashi explicitly teaches developing a set of motion vectors from distinct features and selecting a most reliable motion vector based on a comparison with other motion vectors, (paragraphs 0072-0074) thereby making clear to one of skill in the art that a set of motion vectors derived from different features are assessed to determine a best motion vector, and this was well known before the effective filing date as evinced by Kobayashi.

Regarding claim 4, Bleyer discloses wherein the comparison between the aligned preceding motion model and the first motion model comprises a comparison between look vectors of the aligned preceding motion model and the first motion model, or wherein the comparison between the aligned preceding motion model and the second motion model comprises a comparison between look vectors of the aligned preceding motion model and the second motion model. (paragraphs 0125/0126, preceding motion model, or preceding motion model controlled by inertial tracking, compared with motion model to update motion model)

Regarding claim 7, Bleyer discloses wherein the first image sensor is mounted on a head-mounted display (HMD), (paragraph 0011, main camera in HMD) and wherein the second image sensor is mounted in a user instrument for use in conjunction with the HMD. (paragraph 0006, second camera in user instrument)

Regarding claim 8, Bleyer discloses utilize the final motion model to generate an output image for display to a user. (paragraphs 0007/0011, alignment model used to combine image for display to user)

Regarding claim 9, Bleyer discloses wherein the output image comprises an overlay of the first image and the second image. (paragraphs 0004/0005, first and second images overlaid and combined by alignment matrix to output image)

Regarding claim 10, Bleyer discloses utilize the final motion model as a preceding motion model to facilitate selection of a subsequent final motion model from among a subsequently generated pair of motion models based upon a subsequently acquired set of feature correspondences. (paragraph 0137 in conjunction with paragraph 0142, motion models updated over time such that the current motion model becomes the preceding motion model used to constrain selection or parameters of the current motion model; may be two models from different sets of features, and selected based on cost function)

Regarding claim 11, Bleyer fails to disclose the recited; however, Kobayashi teaches wherein the first motion model or the second motion model is generated utilizing random sample consensus (RANSAC). (paragraph 0006, RANSAC algorithm used to select motion vector) It would have been obvious to one of ordinary skill in the art before the effective filing date to use a RANSAC algorithm as recited to select among motion vectors for the best fit, because RANSAC algorithms were commonly used and well known by those of skill in the art before the effective filing date as evinced by the background of Kobayashi. (paragraph 0006)
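
The selection step mapped for claims 3 and 4 above can be sketched in the same spirit: the preceding motion model is propagated ("aligned") using inertial tracking data, and the candidate models for the current timepoint are compared against it, here via the angle between look vectors. Everything below is an illustrative assumption rather than the applicant's or the references' implementation; the names are hypothetical, and the angular cost merely stands in for a Wahba-style cost function.

# Illustrative sketch only: pick a final model by comparing candidates to the
# preceding model after it has been propagated with inertial data.
import numpy as np

def look_vector(R, forward=np.array([0.0, 0.0, 1.0])):
    """Direction to which the model maps the camera's forward axis."""
    return R @ forward

def angle_deg(u, v):
    cos = np.clip(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def select_final_model(R_prev, R_imu_delta, candidates):
    """Aligned preceding model = IMU delta applied to the preceding model; the
    candidate whose look vector deviates least from it is selected."""
    aligned_prev = R_imu_delta @ R_prev
    costs = [angle_deg(look_vector(aligned_prev), look_vector(R)) for R in candidates]
    return candidates[int(np.argmin(costs))], costs

def rot_y(rad):
    """Rotation about the y axis, used only to build toy 3x3 rotation matrices."""
    c, s = np.cos(rad), np.sin(rad)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Toy example: the first candidate stays close to the IMU-propagated preceding
# model, the second drifts, so the first is chosen as the final motion model.
R_prev, R_imu_delta = rot_y(0.05), rot_y(0.01)
R_final, costs = select_final_model(R_prev, R_imu_delta, [rot_y(0.06), rot_y(0.30)])
print(np.round(costs, 2))   # roughly [0.0, 13.75] degrees

In practice the two candidates would be models such as R_first and R_second from the earlier sketch, fit from the inlier and outlier correspondence sets for the same timepoint.
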
Dependent claims 14, 16 and 17 are system claims reciting features similar to claims 3, 8 and 9, respectively, which are also disclosed by Bleyer for similar reasons.

Regarding claim 18, Bleyer discloses a system for determining motion models for aligning scene content captured by different image sensors, (Abstract, system of camera alignment via motion models) the system comprising: one or more processors; (paragraphs 0215/0216, system has processors) and one or more hardware storage devices that store instructions that are executable by the one or more processors to configure the system to: (paragraphs 0223/0224, storage coupled to processors and storing instructions for execution) generate a plurality of motion models for a timepoint, (paragraph 0011, reference matrix, match matrix and alignment matrix each compiled at a timepoint and may be considered motion models) wherein each motion model of the plurality of motion models comprises respective model parameters determined using a different subset of feature correspondences from a set of feature correspondences, (paragraph 0172, first motion model at a time selected from full set of detected features, and second motion model derived from determining respective correspondences between unprojected subset of features) wherein the set of feature correspondences is determined by performing descriptor matching on features extracted from (i) a first image captured by a first image sensor and (ii) a second image captured by a second image sensor, (shown Figure 4, paragraph 0105, feature matching between first and second images) ... wherein the plurality of motion models comprises at least a first motion model determined using an inlier set and a second motion model determined using an outlier set, wherein the inlier set comprises a first subset of feature correspondences from the set of feature correspondences, and wherein the outlier set comprises a second subset of feature correspondences from the set of feature correspondences that is different from the inlier set, (paragraph 0172, two differing sets of feature correspondences) and wherein no feature correspondence from the set of feature correspondences is included in both the inlier set and the outlier set; (paragraph 0172, two different feature sets, projected and unprojected, and each may be exclusively mapped to one of the inlier/outlier sets; see Figure 5, paragraph 0115) and select a final motion model ... based upon a comparison ... of motion models to a preceding motion model. (paragraph 0172 in conjunction with paragraphs 0179/0181, previous alignment matrix used to determine current alignment matrix)

While Bleyer discloses both cameras capturing an object within a time frame, (paragraphs 0184/0190) Bleyer fails to identically disclose wherein the first image and the second image are captured in a temporally synchronized manner corresponding to the timepoint. However, Konolige teaches wherein the first image and the second image are captured in a temporally synchronized manner corresponding to the timepoint. (paragraph 0033, imagers for stereo imaging actuated at same time synchronously to capture object) Same rationale for combining and motivation as per claim 1 above.

Bleyer further fails to identically disclose [the] plurality of motion models and that each of the plurality of motion models is compared to the preceding motion model. However, Kobayashi teaches [the] plurality of motion models and that each of the plurality of motion models is compared as recited. (paragraphs 0072-0074, multiple motion vectors calculated by different features of blocks in an image are compared to determine the reliable motion vector) Same rationale for combining and motivation as for claim 3 above.

Regarding claim 19, Bleyer discloses wherein the preceding motion model is temporally updated using inertial tracking data. (paragraph 0056, inertial tracking data from IMU used to update motion model, or keep in temporal line)

Regarding dependent claim 20, claim 20 is a system claim reciting features similar to dependent claim 8, and is therefore disclosed by Bleyer for similar reasons.

Claim(s) 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Bleyer in view of Konolige and Kobayashi, in yet further view of Bouhnik (US 2024/0324852).

Regarding claim 6, Bleyer and Kobayashi fail to identically disclose the recited; however, Bouhnik teaches wherein the inlier set comprises feature correspondences associated with a first object positioned at a first depth within a scene represented in the first image and the second image, and wherein the outlier set comprises feature correspondences associated with a second object positioned at a second depth within the scene represented in the first image and the second image. (paragraph 0107, different fragment regions are at different depths and have different feature mappings, which provide different poses or trackings) It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application that different feature mappings of different regions of an image may be associated with different depths as recited, because Bouhnik teaches that different regions may have different depths that are made apparent when feature mapping differing feature sets, (paragraph 0107) and such would be understood by one of skill in the art as inherent in real-world images.

Regarding dependent claim 15, claim 15 is a system claim reciting features similar to claim 6, and is therefore taught by the application of Bouhnik to Bleyer, Konolige and Kobayashi for similar reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Driscoll (US 2024/0378733) would identically disclose the recited, but for the subsequent filing date. Hehn (US 2024/0062412) provides for selecting among motion models based on past data/tracking. Du (US 2023/0215026) implicates generating motion vectors from features.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER KINGSBURY GLOVER, whose telephone number is (303) 297-4401. The examiner can normally be reached Monday-Friday 8-6 MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER KINGSBURY GLOVER/
Examiner, Art Unit 2485

/JAYANTI K PATEL/
Supervisory Patent Examiner, Art Unit 2485

February 21, 2026

Prosecution Timeline

Apr 25, 2023: Application Filed
Jan 25, 2025: Non-Final Rejection — §103
Feb 07, 2025: Interview Requested
Feb 19, 2025: Interview Requested
Mar 05, 2025: Applicant Interview (Telephonic)
Mar 05, 2025: Examiner Interview Summary
Mar 13, 2025: Response Filed
May 22, 2025: Final Rejection — §103
Jul 25, 2025: Request for Continued Examination
Jul 29, 2025: Response after Non-Final Action
Aug 01, 2025: Non-Final Rejection — §103
Nov 05, 2025: Response Filed
Nov 28, 2025: Final Rejection — §103
Jan 07, 2026: Request for Continued Examination
Jan 25, 2026: Response after Non-Final Action
Feb 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598316: REUSE OF BLOCK TREE PATTERN IN VIDEO COMPRESSION (2y 5m to grant; granted Apr 07, 2026)
Patent 12598336: A/V TRANSMISSION DEVICE AND A/V RECEPTION DEVICE (2y 5m to grant; granted Apr 07, 2026)
Patent 12586453: System and Method for Monitoring Life Signs of a Person (2y 5m to grant; granted Mar 24, 2026)
Patent 12556672: VIDEO PROCESSING APPARATUS FOR DESIGNATING AN OBJECT ON A PREDETERMINED VIDEO AND CONTROL METHOD OF THE SAME, AND STORAGE MEDIUM (2y 5m to grant; granted Feb 17, 2026)
Patent 12556725: ADAPTIVE RESOLUTION CODING FOR VIDEO CODING (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 56%
With Interview: 85% (+28.3%)
Median Time to Grant: 2y 2m
PTA Risk: High
Based on 177 resolved cases by this examiner. Grant probability derived from career allow rate.
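
The with-interview figure is consistent with treating the interview lift as additive percentage points on top of the career allow rate; a quick check under that assumed additive model:

# Assumes the interview lift simply adds percentage points to the career allow rate.
granted, resolved, lift_pts = 100, 177, 28.3
base_pct = 100 * granted / resolved                   # 56.5, displayed as 56%
print(round(base_pct), round(base_pct + lift_pts))    # 56 85
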
