Prosecution Insights
Last updated: April 19, 2026
Application No. 18/794,323

TWO DIMENSIONAL TO THREE DIMENSIONAL MOVING IMAGE CONVERTER

Non-Final OA (§102, §103, §DP)
Filed: Aug 05, 2024
Examiner: KRZYSTAN, ALEXANDER J
Art Unit: 2694
Tech Center: 2600 — Communications
Assignee: Mr. Steven M. Hoffberg
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 81% (above average; 913 granted / 1121 resolved; +19.4% vs TC avg)
Interview Lift: +6.9% (moderate lift, measured across resolved cases with interview)
Avg Prosecution: 3y 1m (typical timeline)
Total Applications: 1159 across all art units (38 currently pending)
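The headline figures above can be reproduced from the raw counts. A minimal sketch, assuming the "+19.4% vs TC avg" delta is expressed in percentage points (the page does not state the Tech Center average directly):

```python
# Reproduce the examiner statistics from the raw counts shown above.
granted, resolved = 913, 1121
allow_rate = granted / resolved * 100      # career allow rate, in percent
delta_vs_tc = 19.4                         # "+19.4% vs TC avg", assumed to be points
implied_tc_avg = allow_rate - delta_vs_tc  # implied Tech Center average

print(f"Career allow rate: {allow_rate:.1f}%")      # 81.4%, displayed as 81%
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # 62.0%
```

Under that assumption, the Tech Center average allow rate works out to about 62%.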

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 37.1% (-2.9% vs TC avg)
§102: 24.3% (-15.7% vs TC avg)
§112: 21.0% (-19.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 1121 resolved cases.
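Each statute delta above is consistent with a single Tech Center average of about 40%. A minimal sketch of that check, where the 40% figure is an assumption inferred from the numbers rather than stated on the page:

```python
# Check: every examiner rate plus the magnitude of its delta sums to ~40,
# suggesting delta = examiner_rate - tc_avg with a common tc_avg of 40%.
tc_avg = 40.0  # assumed Tech Center average, in percent
examiner_rates = {"101": 2.7, "103": 37.1, "102": 24.3, "112": 21.0}

for statute, rate in examiner_rates.items():
    delta = rate - tc_avg
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

Running this reproduces the four deltas shown (-37.3, -2.9, -15.7, -19.0).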

Office Action

§102, §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application is being examined under the pre-AIA first to invent provisions.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA, as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,058,306. Although the claims at issue are not identical, they are not patentably distinct from each other because application claim 1 claims a broader version of the same system claimed by patent claim 1.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of pre-AIA 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of application for patent in the United States.

Claims 1-4, 6-9, 11-13, 15-17, and 20 are rejected under pre-AIA 35 U.S.C. 102(b) as being anticipated by Zhang (US 20110096832 A1).

As per claim 1, Zhang discloses a method comprising: receiving an ordered set of two-dimensional images representing a moving object from a perspective (para. 70, the object and 2D video input); analyzing the two-dimensional images to determine a motion vector of an object and the perspective (para. 70, part of the process of estimating object movement through adjacent video frames); and generating a synthetic view of the object from a different perspective, dependent on at least the motion vector (the motion vector can be used to estimate depth, which can be used in the 3D view per para. 68).

As per claim 2, the method according to claim 1, wherein the synthetic view comprises a second ordered set of two-dimensional images from the different perspective and having the same motion vector (the 3D views can be in the context of H.264 per para. 87: 3D input which is represented by two views).

As per claim 3, the method according to claim 2, wherein the synthetic view together with the ordered set of two-dimensional images comprises a stereoscopic video image (the 3D image represented by two views per the claim 2 rejection).

As per claim 4, the method according to claim 1, further comprising generating a three-dimensional model of the object from the ordered set of two-dimensional images (the depth maps cited in the above rejections as used for the 2D-to-3D conversion per para. 76).
As per claim 6, the method according to claim 1, wherein the synthetic view comprises stereoscopic image pairs (para. 75, synthesized stereo pairs).

As per claim 7, the method according to claim 1, wherein the synthetic view comprises a second ordered set of two-dimensional images representing the moving object from the different perspective (para. 75, synthesized stereo pairs based on the motion vector based processing cited above).

As per claim 8, the method according to claim 1, further comprising looking up a record associated with the moving object to determine a state of a hidden surface in at least one two-dimensional image (looking up the record of the parameters used to perform the processes in paras. 81-82 regarding the 3D warping and/or the orientation processing, where orientation and 3D warping are each determined states of the hidden surfaces, i.e., the parts of the 3D representation of the objects that are not currently in view).

As per claim 9, Zhang discloses a method comprising: receiving a representation of a two-dimensional image (para. 70, the object and 2D video input); determining a perspective and a depth gradient of the two-dimensional image (the orientation and depth maps per para. 41, and the perspective via any of the parameters for the orientation based processing per para. 78); predicting a characteristic of a hidden surface of at least one object in the two-dimensional image (the estimated depth maps and/or additional views per para. 54); and visually representing the object as stereoscopic images (the synthesized 3D pairs of video images per para. 75).

As per claim 11, the method according to claim 9, further comprising transforming the perspective of the two-dimensional image to a different perspective prior to visually representing the object as the stereoscopic images (the 3D warping process per para. 76, or the orientation based processing per para. 78).
As per claim 12, the method according to claim 11, wherein the transforming comprises converting the two-dimensional image to a three-dimensional image (the generated 3D sequence per para. 91).

As per claim 13, the method according to claim 11, wherein the two-dimensional image comprises a video image, and the transformation of the two-dimensional image to the three-dimensional image occurs in real time at a rate of the video image (para. 28, the system can be part of a codec used in video communications, which requires said processing be performed at a real-time rate for the purpose of allowing video communication).

As per claim 15, the system of the claim 1 rejection provides a non-transitory computer readable medium, comprising: instructions for automatically (the system can be used as part of a communications process where all of the cited functions must be performed in real time in order to perform the communications, paras. 29, 31) analyzing a set of images to determine at least one perspective view of an object (any of the parameters or inputs used to make the motion vector based on the 2D input per the claim 1 rejection); instructions for automatically (same reasoning as cited above) determining a characteristic of the object in the set of images (the depth, location, or orientation determinations per the claim 1 rejection); instructions for predicting a state of a hidden surface of the object in at least one image of the set of images based on at least the characteristic (the synthesized 3D view comprises a predicted state of the hidden or depth surface of an object as part of the 3D video, noting para. 34, 2D-to-3D conversion); and generating an output image representing the object comprising a view of at least a portion of the hidden surface based on the predicted state of the hidden surface (the 3D warping and view synthesis per paras. 76 and 77 provide a hidden surface, and also the conversion from 2D to 3D via adding depth per para. 34 and the claim 1 rejection).
As per claim 16, the non-transitory computer readable medium according to claim 15, wherein the output image comprises a stereoscopic image (the 3D video cited above).

As per claim 17, the non-transitory computer readable medium according to claim 16, further comprising instructions for generating a three-dimensional model of the object, wherein the output image is generated dependent on the three-dimensional model (the parameters supporting the synthesizing based processing cited in the claim 1 rejection in order to synthesize 3D views from a 2D view via a depth map and motion vectors).

As per claim 20, the non-transitory computer readable medium according to claim 15, further comprising instructions for determining a motion vector of the object (per the claim 1 rejection), wherein the output image comprises a synthetic view of the object dependent on the determined motion vector (the synthesized 3D image based on the 2D image and the motion vector).

Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5 and 14 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Zhang (US 20110096832 A1) as applied to claim 1 above.
As per claim 5, Zhang discloses the method according to claim 1, wherein the synthetic view of the object from a different perspective is generated via a volumetric data transform (part of the 2D-to-3D video conversion cited above), but does not specify the use of a single-instruction multiple-data (SIMD) processor which performs the volumetric data transform. The examiner takes official notice that it is well known in the art to implement well known processor architectures to perform the cited 2D-to-3D video conversion, including the volumetric transforms, for the purpose of conforming to well known processing standards and architectures.

As per claim 14, the method according to claim 11, further comprising transforming information of a series of the two-dimensional images with a single-instruction, multiple-data (SIMD) processor (per the claim 5 rejection).

Claims 10, 18, and 19 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Zhang (US 20110096832 A1) as applied to claim 1 above, and further in view of Algreatly (US 20100080485 A1).

As per claim 10, Zhang discloses the method according to claim 9, but does not specify wherein the determining the perspective and depth gradient comprises extracting at least one of a vanishing line and a vanishing point. Algreatly teaches an improved interface for 3D object manipulation by a user, including modifying a perspective/orientation and a position which includes depth. Algreatly teaches that vanishing points are extracted and used in processing the perspective and depth/position per paras. 74, 75, and 78. It would have been obvious to one skilled in the art at the time of filing to extract vanishing points for the purpose of allowing manipulation of the perspective and depth gradient of Zhang, for the advantage of an improved interface to manipulate 3D objects.
As per claim 18, the non-transitory computer readable medium according to claim 15, further comprising instructions for extracting at least one depth gradient in the at least one image based on at least one vanishing line (paras. 70, 71, and per the claim 10 rejection) and at least one vanishing point (paras. 70, 71, and per the claim 10 rejection).

As per claim 19, the prior art cited above discloses the non-transitory computer readable medium according to claim 18; however, Zhang and Algreatly do not specify wherein the automatically determining a characteristic of the object comprises performing a lookup of the object in a database. The examiner takes official notice that it is well known in the art to use lookup tables and prestored function results and objects for the purpose of improved processing architectures.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER KRZYSTAN, whose telephone number is 571-272-7498 and whose email address is alexander.krzystan@uspto.gov. The examiner can usually be reached M-F, 7:30-4:00 EST. If attempts to reach the examiner by telephone or email are unsuccessful, the examiner’s supervisor, Fan Tsang, can be reached at (571) 272-7547. The fax phone numbers for the organization where this application or proceeding is assigned are 571-273-8300 for regular communications and 571-273-8300 for After Final communications.

/ALEXANDER KRZYSTAN/
Primary Examiner, Art Unit 2653
March 31, 2026

Prosecution Timeline

Aug 05, 2024 — Application Filed
Mar 26, 2026 — Non-Final Rejection: §102, §103, §DP (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12598440: RENDERING OF OCCLUDED AUDIO ELEMENTS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593170: SWITCHING METHOD FOR AUDIO OUTPUT CHANNEL, AND DISPLAY DEVICE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12573410: DECODER, ENCODER, AND METHOD FOR INFORMED LOUDNESS ESTIMATION IN OBJECT-BASED AUDIO CODING SYSTEMS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12574675: Acoustic Device and Method (granted Mar 10, 2026; 2y 5m to grant)
Patent 12541554: TRANSCRIPT AGGREGATON FOR NON-LINEAR EDITORS (granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview: 88% (+6.9%)
Median Time to Grant: 3y 1m
PTA Risk: Low

Based on 1121 resolved cases by this examiner. Grant probability derived from career allow rate.
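A minimal sketch of how the "With Interview" projection appears to be derived, assuming the interview lift is simply added to the base grant probability in percentage points (the page does not state the formula):

```python
# Base grant probability (from the career allow rate) plus the interview lift.
base = 81.0           # grant probability, in percent
interview_lift = 6.9  # interview lift, in percentage points

with_interview = base + interview_lift
print(f"With interview: {with_interview:.0f}%")  # 87.9 displayed as 88%
```

This reproduces the 88% figure shown above; the additive model is an assumption, not documented behavior of the tool.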
