Prosecution Insights
Last updated: April 19, 2026
Application No. 17/502,594

VIEWPOINT PATH STABILIZATION
Non-Final OA — §102, §103

Filed: Oct 15, 2021
Examiner: WERNER, DAVID N
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Fyusion Inc.
OA Round: 7 (Non-Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 3y 3m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 68% — above average (483 granted / 713 resolved; +9.7% vs TC avg)
Interview Lift: +16.2% — strong (resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Currently Pending: 32
Total Applications: 745 (across all art units)

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 44.8% (+4.8% vs TC avg)
§102: 23.1% (-16.9% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 713 resolved cases

Office Action — §102, §103
DETAILED ACTION

This Office action for U.S. Patent Application No. 17/502,594 is responsive to the Request for Continued Examination filed 14 January 2026, in reply to the Final Rejection of 13 November 2025. Claims 1–20 are pending. In the previous Office action, claims 1–7 were rejected under 35 U.S.C. § 102(a)(1) as anticipated by U.S. Patent Application Publication No. 2013/0129192 A1 (“Wang”). Claims 8–12, 19, and 20 were rejected under 35 U.S.C. § 103 as obvious over Wang in view of US 2020/0228774 A1 (“Kar”).

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 C.F.R. § 1.114

A request for continued examination under 37 C.F.R. § 1.114, including the fee set forth in 37 C.F.R. § 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 C.F.R. § 1.114, and the fee set forth in 37 C.F.R. § 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 C.F.R. § 1.114. Applicant's submission filed on 14 January 2026 has been entered.

Response to Arguments

Applicant's arguments filed with respect to claim 1 have been fully considered, but they are not persuasive. Applicant first claims again,¹ without support, that there is a patentable difference between the claimed smoothed trajectory and the Wang smoothed camera path. 14 January 2026 “REMARKS/ARGUMENTS” (“Rem.”) 8. The multiple refutations of this allegation earlier in prosecution are restated and incorporated by reference. Applicant next quotes Wang ¶ 0071, which recites that “[a]ny type of smoothing operation known in the art can be used to determined the smoothed camera path 430”.
Applicant questions whether the “standard procedures” listed as the Wang preferred embodiments, such as fitting a smoothing spline, qualify as an optimization function having a loss function as claimed. D.S.G. Pollock, “Smoothing with Cubic Splines”, in Handbook of Time Series Analysis, Signal Processing, and Dynamics 293–322 (1999), is added to the record. This reference discusses a cubic spline as a smoothing curve that balances “the trade-off between smoothness of the curve and its closeness to the data points” (1) as the curve minimizes the squared norm from the data points (15–16). This showing is sufficient to establish that the Wang smoothing spline meets the claimed criteria.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1–7 and 13–18 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2013/0129192 A1 (“Wang”).
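Editor's note: the smoothing-spline loss described in the Pollock reference, closeness to the data points plus a penalty on curvature, can be sketched numerically. The snippet below is an illustrative stand-in, not the Wang or Pollock formulation: it uses a discrete second-difference penalty in place of the cubic spline's curvature integral, and the path data and `lam` weight are made up.

```python
import numpy as np

# The smoothed path z minimizes a two-term loss:
#   ||y - z||^2            (closeness to the noisy camera positions y)
# + lam * ||D2 @ z||^2     (roughness penalty on the second differences)
# Setting the gradient to zero gives the linear system
#   (I + lam * D2.T @ D2) z = y, solved directly below.

def smooth_path(y, lam=50.0):
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference operator
    A = np.eye(n) + lam * (D2.T @ D2)
    return np.linalg.solve(A, y)

# A noisy 1-D camera coordinate sampled along a capture path (synthetic data)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(50)
z = smooth_path(y)

# The minimizer trades a little closeness for much lower roughness
rough = lambda v: np.sum(np.diff(v, 2) ** 2)
```

Raising `lam` pushes the solution toward a straight line; `lam = 0` returns the data unchanged, which is the trade-off the reference describes.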
Wang, directed to converting 2-D video to 3-D video, teaches with respect to claim 1 a method comprising: projecting via a processor a plurality of three-dimensional points onto first locations in a first image of an object captured from a first position in three-dimensional space relative to the object (¶¶ 0045, 0058; extract 3-D point cloud of scene), the first position associated with a first actual camera position along a path of actual camera positions (Figs. 4–5, ¶¶ 0070–71; input camera path 420, input camera path graph 480) . . . ; projecting via a processor the plurality of three-dimensional points onto second locations for a virtual camera position located at a second position in three-dimensional space relative to the object (¶ 0084, warping 530 image to target viewpoint 520), the second position corresponding to a location along a smoothed trajectory for a plurality of virtual camera positions (Figs. 4–5, ¶¶ 0070–71; smoothed camera path 430, smoothed camera path graph 485), wherein a plurality of locations along the smoothed trajectory are determined by using rotational path modeling comprising determining an optimization function having a loss function (¶ 0069, directly measuring the camera position using position sensors along with a motion algorithm structure to estimate camera positions; ¶¶ 0070–71, smoothing camera path from input camera positions, e.g., using a cubic smoothing spline described as “well-known in the art”; see D.S.G. Pollock, “Smoothing with Cubic Splines”, in Handbook of Time Series Analysis, Signal Processing, and Dynamics 293–322 (1999), as demonstration that a cubic smoothing spline minimizes curvature and norm from data points, as a valid optimized regression curve) and minimizing one or more loss functions (¶ 0071, smoothing spline function “well-known in the art” is a curve known to minimize norm from data points); determining via a processor a first plurality of transformations, each of the first plurality of transformations linking a respective one of the first locations with a respective one of the second locations (¶ 0090, equation 1 of pixel position from image frame to virtual view); based on the plurality of transformations, determining via the processor a second plurality of transformations transforming first coordinates for the first image of the object to second coordinates for a second image of the object so that the second image corresponds to a view along the smoothed trajectory (¶ 0091, pixel correspondence function using the Equation 1 transformation relates the pixel coordinates in a video frame to a corresponding warped image with the target viewpoint); and generating via the processor the second image of the object from the virtual camera position based on the second plurality of transformations (¶¶ 0084, 0092; synthesizing the warped output image).

Regarding claim 2, Wang teaches the method of claim 1, wherein the first coordinates correspond to a first two-dimensional mesh overlain on the first image of the object (id.), and wherein the second coordinates correspond to a two-dimensional mesh overlain on the second image of the object (id.).

Regarding claim 3, Wang teaches the method of claim 1, wherein the first image of the object is one of a first plurality of images captured by a camera moving along an input path through space around the object (Fig. 5, input camera path 480), and wherein the second image is one of a second plurality of images generated at respective virtual camera positions relative to the object (id., smoothed camera path 485).

Regarding claim 4, Wang teaches the method of claim 3, the method further comprising: determining a smoothed path through space around the object based on the input path (Fig. 5, camera path 485 is smoothed version of input camera path 480); and determining the virtual camera position based on the smoothed path (¶¶ 0071–76, modifying viewpoint of digital image to be at smoothed camera position).

Regarding claim 5, Wang teaches the method of claim 1, wherein the plurality of three-dimensional points are determined at least in part via motion data captured from an inertial measurement unit at the mobile computing device (¶¶ 0056–57, camera location and position in space determined from external parameters from position sensor such as gyroscope, accelerometer, or GPS).

Regarding claim 6, Wang teaches the method of claim 1, wherein the motion data includes data selected from the group consisting of: accelerometer data, gyroscopic data, and global positioning system (GPS) data (¶ 0057, “Types of position sensors used in digital cameras commonly include gyroscopes, accelerometers and global positioning system (GPS) sensors”).

Regarding claim 7, Wang teaches the method of claim 1, wherein the plurality of three-dimensional points are determined at least in part based on depth sensor data captured from a depth sensor (¶ 0107, application to Microsoft Kinect imaging system that includes an RGB digital camera and a depth sensor).

Regarding claim 13, Wang teaches the method of claim 1, wherein the processor is located within a mobile computing device that includes a camera, the first image being captured by the camera (¶¶ 0070–71, camera moves on path).
Regarding claim 14, Wang teaches the method of claim 1, wherein the processor is connected to a camera, the first image being captured by the camera (¶¶ 0070–71, camera moves on path).

Regarding claim 15, Wang teaches a non-transitory computer-readable storage medium . . . including instructions that when executed by a computer, cause the computer to (¶ 0120, implementation on non-transitory computer readable storage medium storing instructions for controlling a computer to practice the described method): [perform the claim 1 method] (claim 1 rejection supra).

Regarding claim 16, Wang teaches a computing apparatus comprising: a processor (¶ 0120, computer that practices the described method); and a memory storing instructions that, when executed by the processor (id., storage medium storing instructions for controlling the computer to practice the method), configure the apparatus to: [perform the claim 1 method] (claim 1 rejection supra).

Regarding claim 17, Wang teaches the computing apparatus of claim 16, wherein the first image of the object is one of a first plurality of images captured by a camera moving along an input path through space around the object (Fig. 5, input camera path 480), and wherein the second image is one of a second plurality of images generated at respective virtual camera positions relative to the object (id., smoothed camera path 485).

Regarding claim 18, Wang teaches the computing apparatus of claim 17, wherein the instructions further configure the apparatus to: determine a smoothed path through space around the object based on the input path (Fig. 5, camera path 485 is smoothed version of input camera path 480); and determine the virtual camera position based on the smoothed path (¶¶ 0071–76, modifying viewpoint of digital image to be at smoothed camera position).

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.

Claims 8–12, 19, and 20 are rejected under 35 U.S.C. § 103 as being obvious over Wang in view of U.S. Patent Application Publication No. 2020/0228774 A1 (“Kar”).
Applicant has provided evidence in this file showing that the claimed invention and the subject matter disclosed in the “Kar” prior art reference were owned by, or subject to an obligation of assignment to, the same entity, Fyusion, Inc., not later than the effective filing date of the claimed invention, or that the subject matter disclosed in the prior art reference was developed and the claimed invention was made by, or on behalf of, one or more parties to a joint research agreement in effect not later than the effective filing date of the claimed invention. However, although Kar has been excepted as prior art under 35 U.S.C. § 102(a)(2), because it names additional inventors, it is still applicable as prior art under 35 U.S.C. § 102(a)(1) that cannot be excepted under 35 U.S.C. § 102(b)(2)(C). M.P.E.P. § 2153.01(a).

Applicant may rely on the exception under 35 U.S.C. § 102(b)(1)(A) to overcome this rejection using a reference that qualifies as prior art under 35 U.S.C. § 102(a)(1) by a showing under 37 C.F.R. § 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application, and is therefore not prior art under 35 U.S.C. § 102(a)(1). M.P.E.P. §§ 2153.01, 2155.01, 717.01(a). Alternatively, applicant may rely on the exception under 35 U.S.C. § 102(b)(1)(B) by providing evidence of a prior public disclosure via an affidavit or declaration under 37 C.F.R. § 1.130(b). M.P.E.P. §§ 2153.02, 2155.02, 717.01(b).

Claims 8–12, 19, and 20 are directed to further details of the invention not taught by Wang. However, as will be shown below, Kar overcomes each of these deficiencies of Wang.

Regarding claim 8, Wang in view of Kar teaches the method of claim 1, wherein the second plurality of transformations is generated via a neural network (Kar ¶ 0051, reconstructing image with novel view by using a convolutional neural network to blend images).
It would have been obvious to one of ordinary skill in the art at the time of effective filing to perform the Wang method using a neural network, since it has been held that updating prior art systems to use modern electronics is considered within the ordinary skill in the art. Leapfrog Enterprises v. Fisher-Price, Inc., 485 F.3d 1157, 1161–62 (Fed. Cir. 2007).

Regarding claim 9, Wang in view of Kar teaches the method of claim 8, wherein the first plurality of transformations are provided as reprojection constraints to the neural network (Kar ¶ 0174, points of interest are constrained in search in stabilization algorithm).

Regarding claim 10, Wang in view of Kar teaches the method of claim 8, wherein the neural network includes one or more similarity constraints that penalize deformation of the first two-dimensional mesh via the second plurality of transformations (Wang ¶¶ 0047, 0060–62; candidate images must score above predefined similarity threshold).

Regarding claim 11, Wang in view of Kar teaches the method of claim 1, the method further comprising generating a multiview interactive digital media representation (MVIDMR) that includes the second set of images (Kar ¶¶ 0161–288, application to creating MVIDMR), the MVIDMR being navigable in one or more dimensions along one or more smoothed trajectories (¶¶ 0122, 0124; apparent property of MVIDMR to be navigable between viewpoints).

Regarding claim 12, Wang in view of Kar teaches the method of claim 1, wherein the second image is generated via a neural network (Kar ¶ 0051, reconstructing image with novel view by using a convolutional neural network to blend images).
Regarding claim 19, Wang in view of Kar teaches the computing apparatus of claim 16, wherein the second plurality of transformations is generated via a neural network (Kar ¶ 0051, reconstructing image with novel view by using a convolutional neural network to blend images), and wherein the first plurality of transformations are provided as reprojection constraints to the neural network (Kar ¶ 0174, points of interest are constrained in search in stabilization algorithm). It would have been obvious to one of ordinary skill in the art at the time of effective filing to perform the Wang method using a neural network, since it has been held that updating prior art systems to use modern electronics is considered within the ordinary skill in the art. Leapfrog Enterprises v. Fisher-Price, Inc., 485 F.3d 1157, 1161–62 (Fed. Cir. 2007).

Regarding claim 20, Wang in view of Kar teaches the computing apparatus of claim 19, wherein the neural network includes one or more similarity constraints that penalize deformation of the first two-dimensional mesh via the second plurality of transformations (Wang ¶¶ 0047, 0060–62; candidate images must score above predefined similarity threshold).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following prior art was found using an Artificial Intelligence assisted search with an internal AI tool that uses the classification of the application under the Cooperative Patent Classification (CPC) system, as well as the specification, including the claims and abstract, of the application, as contextual information. The documents are ranked from most to least relevant. Where possible, English-language equivalents are given, and redundant results within the same patent families are eliminated. See “New Artificial Intelligence Functionality in PE2E Search”, 1504 OG 359 (15 November 2022); “Automated Search Pilot Program”, 90 F.R. 48,161 (8 October 2025).
US 2015/0009277 A1
US 2018/0081995 A1
US 2018/0096451 A1
US 2013/0314410 A1
US 2014/0212031 A1
US 2020/0234398 A1

Any inquiry concerning this communication or earlier communications from the examiner should be directed to David N Werner, whose telephone number is (571) 272-9662. The examiner can normally be reached M–F 7:30–4:00 Central. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Dave Czekaj, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/David N Werner/
Primary Examiner, Art Unit 2487

¹ See 13 November 2025 Final Rejection at Footnote 1.
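Editor's sketch of the mechanics behind the claim-1 rejection: projecting the same 3-D points through an actual camera and through a virtual camera on the smoothed path, then linking the resulting 2-D locations (the "first plurality of transformations"). This is not Wang's Equation 1; the pinhole model, the intrinsics matrix `K`, and the camera offsets are all assumed values for illustration.

```python
import numpy as np

# Assumed pinhole intrinsics: focal length 500 px, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_3d, R, t):
    """Project Nx3 world points to pixel coordinates for camera pose (R, t)."""
    cam = points_3d @ R.T + t        # world frame -> camera frame
    px = cam @ K.T                   # apply intrinsics
    return px[:, :2] / px[:, 2:3]    # perspective divide

# A small synthetic point cloud in front of the camera
pts = np.array([[ 0.0,  0.0, 5.0],
                [ 1.0,  0.5, 6.0],
                [-1.0, -0.5, 7.0]])

R = np.eye(3)
first = project(pts, R, np.zeros(3))               # actual camera position
second = project(pts, R, np.array([-0.2, 0.0, 0.0]))  # virtual camera on smoothed path

# One 2-D offset per point, linking each first location to its second location
transforms = second - first
```

Because each point sits at a different depth, each offset differs, which is why the claim recites a *plurality* of transformations rather than a single global shift.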

Prosecution Timeline

Oct 15, 2021
Application Filed
Mar 25, 2023
Non-Final Rejection — §102, §103
Oct 16, 2023
Response Filed
Oct 26, 2023
Final Rejection — §102, §103
Dec 28, 2023
Response after Non-Final Action
Jan 24, 2024
Request for Continued Examination
Jan 30, 2024
Response after Non-Final Action
Feb 01, 2024
Non-Final Rejection — §102, §103
May 07, 2024
Response Filed
May 07, 2024
Response after Non-Final Action
Sep 15, 2024
Response Filed
Nov 28, 2024
Final Rejection — §102, §103
Jan 27, 2025
Response after Non-Final Action
Feb 25, 2025
Request for Continued Examination
Feb 28, 2025
Response after Non-Final Action
Mar 19, 2025
Non-Final Rejection — §102, §103
Jun 24, 2025
Response Filed
Nov 11, 2025
Final Rejection — §102, §103
Jan 14, 2026
Request for Continued Examination
Jan 25, 2026
Response after Non-Final Action
Jan 27, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598312 — OVERHEAD REDUCTION IN MEDIA STORAGE AND TRANSMISSION
2y 5m to grant — Granted Apr 07, 2026

Patent 12598297 — METHOD AND APPARATUS FOR RECONSTRUCTING 360-DEGREE IMAGE ACCORDING TO PROJECTION FORMAT
2y 5m to grant — Granted Apr 07, 2026

Patent 12593144 — SOLID STATE IMAGING ELEMENT, IMAGING DEVICE, AND SOLID STATE IMAGING ELEMENT CONTROL METHOD
2y 5m to grant — Granted Mar 31, 2026

Patent 12587754 — METHOD FOR DYNAMIC CORRECTION FOR PIXELS OF THERMAL IMAGE ARRAY
2y 5m to grant — Granted Mar 24, 2026

Patent 12587689 — METHOD AND APPARATUS FOR RECONSTRUCTING 360-DEGREE IMAGE ACCORDING TO PROJECTION FORMAT
2y 5m to grant — Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 68%
With Interview: 84% (+16.2%)
Median Time to Grant: 3y 3m
PTA Risk: High

Based on 713 resolved cases by this examiner. Grant probability derived from career allow rate.
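As a sanity check on the arithmetic, the headline percentages follow directly from the stated counts (483 granted of 713 resolved) if the interview lift is read as additive percentage points, which is an assumption about how the dashboard computes it:

```python
# Reproducing the dashboard's headline figures from its stated raw counts
granted, resolved = 483, 713            # "483 granted / 713 resolved"
allow_rate = granted / resolved         # career allow rate

interview_lift = 0.162                  # "+16.2%" read as additive points
with_interview = allow_rate + interview_lift

print(round(allow_rate * 100))          # 68  -> "Grant Probability: 68%"
print(round(with_interview * 100))      # 84  -> "With Interview: 84%"
```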
