Prosecution Insights
Last updated: April 19, 2026
Application No. 18/590,385

SYSTEMS AND METHODS FOR GENERATING OVERLAYS OF 3D MODELS IN 2D CONTENT ITEMS

Final Rejection — §102, §103
Filed: Feb 28, 2024
Examiner: HAILU, TADESSE
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: Adeia Guides Inc.
OA Round: 2 (Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 78% — above average (747 granted / 960 resolved; +22.8% vs TC avg)
Interview Lift: +4.5% in resolved cases with interview (minimal)
Avg Prosecution: 3y 4m typical timeline (29 applications currently pending)
Total Applications: 989 across all art units (career history)

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§103: 38.1% (-1.9% vs TC avg)
§102: 41.1% (+1.1% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 960 resolved cases.
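The per-statute deltas above are internally consistent: subtracting each delta from the examiner's displayed rate recovers the same Tech Center baseline for every statute. A minimal sketch of that check (the ~40% baseline is inferred from the numbers above, not stated anywhere in the source):

```python
# Examiner's displayed per-statute rate, paired with the delta
# versus the Tech Center (TC) average as shown on the dashboard.
stats = {
    "101": (5.8, -34.2),
    "103": (38.1, -1.9),
    "102": (41.1, +1.1),
    "112": (9.0, -31.0),
}

# TC average = examiner rate minus delta; if the dashboard figures
# are consistent, every statute should recover the same baseline.
tc_avg = {k: round(rate - delta, 1) for k, (rate, delta) in stats.items()}
print(tc_avg)  # every statute implies the same ~40.0% TC baseline
```

That all four statutes back out to one shared baseline suggests the black-line estimate is a single TC-wide figure rather than a per-statute average.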

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This Office Action is in response to the Amendment filed on Jan 08, 2026.

Response to Arguments

3. Applicant's arguments filed Jan 08, 2026 have been fully considered but they are not persuasive. The applicant argues that Francois fails to teach or suggest "determining that [a] first object is displayed in a threshold number of consecutive frames of [a] 2D content item" and, in response to that determination, "retrieving a three-dimensional (3D) model of a second object based on [] at least one attribute of the first object." The examiner disagrees. As illustrated in at least Fig. 14, Francois discloses determining that a first object is displayed in a threshold number of consecutive frames of a 2D content item: for example, Francois discloses that the interface 1404 may include a count of the number of frames or images captured to complete the 360-degree composite image and alignment aids to assist the user in maintaining a consistent orientation as the user traverses the path about the object (column 28, lines 43-51). Francois further discloses retrieving a three-dimensional (3D) model of a second object based on the at least one attribute of the first object (as shown in Fig. 15, the vehicle is now shown with a different orientation and direction than the first attribute or orientation of the vehicle in Fig. 14; column 29, lines 40-54). Furthermore, FIG. 17 depicts the interface of FIG. 14 including an image of a vehicle taken at approximately 58 degrees of rotation during the 360-degree composite image capture process, in accordance with certain embodiments of the present disclosure, and FIG. 18 depicts the interface of FIG. 14 including an image of a vehicle taken at approximately 160 degrees of rotation during the 360-degree composite image capture process.
Thus, the argument is not persuasive, the rejection is maintained, and this action is FINAL.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

4. Claims 1-3, 5-7, 10-13, 15-17 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Francois et al (US 10,284,794 B1). The current invention is directed to systems and methods for generating overlays of 3D models in 2D content items. Similarly, Francois et al (US 10,284,794 B1) is directed to three-dimensional stabilized 360-degree composite image capture.

As per claim 1, Francois discloses a method (flowcharts of Figs. 7-13 and 30-31) comprising: receiving, at a user interface of a computing device, during display of a two-dimensional (2D) content item, a user interaction associated with a first object displayed in the 2D content item ("In some embodiments, the composite image may be presented within an interface (such as a graphical user interface) through which the composite image may be interactive. For example, a user may use a touch screen interface (e.g. display interface 212 and input interface 210) to rotate the composite image in one direction or another direction, causing the interface to present images of the composite image in a smooth manner." column 11, lines 40-47); and in response to determining that the first object is displayed in a threshold number of consecutive frames of the 2D content item (FIG. 14 depicts an interface 1400 including an image of a vehicle 1402 between visual alignment aids, in accordance with certain embodiments of the present disclosure; the interface 1404 may include a count of the number of frames or images captured to complete the 360-degree composite image and alignment aids to assist the user in maintaining a consistent orientation as the user traverses the path about the object, column 28, lines 43-51); identifying the first object (vehicle 1402) and at least one attribute (see direction and orientation of the vehicle) of the first object (see captured image shown at different orientations (attributes) in Figs. 14-21); retrieving a three-dimensional (3D) model of a second object based on the at least one attribute of the first object (as shown in Fig. 15, the vehicle is now shown with a different orientation and direction than the first attribute of Fig. 14, column 29, lines 40-54); and providing for display an overlay of the 3D model of the second object at the computing device during display of the 2D content item (for example, FIG. 15 illustrates the interface of FIG. 14 including an indicator showing a change in the orientation of the computing device; that is, Fig. 15 illustrates an overlay of the 3D model of the second object (i.e., the vehicle 1402 shown in a different orientation) during display of the 2D content item (see background image), as shown in Figs. 14-21).

As per claim 2, Francois further discloses the method of claim 1, further comprising: in response to the providing for display the overlay of the 3D model of the second object at the computing device, receiving a second user interaction at the overlay of the 3D model of the second object (see, for example, FIG. 15: the user interacts with the displayed interactive element/icon to change the direction/orientation of the vehicle; in the illustrated embodiment of Fig. 15, the interface 1500 provides visual feedback indicating a change with regard to the viewing angle, column 29, lines 40-54); and in response to the receiving the second user interaction, modifying at least one of an orientation or a size of the overlay of the 3D model (FIG. 17 depicts an embodiment 1700 of the interface of FIG. 14 including an image of a vehicle taken at approximately 58 degrees of rotation during the 360-degree composite image capture process, in accordance with certain embodiments of the present disclosure; the interface 1700 includes a circular indicator showing a number of degrees that the user has moved about the object, column 29, lines 55-64).

As per claim 3, Francois further discloses the method of claim 2, further comprising: in response to the receiving the second user interaction, providing for display data of the second object at the user interface of the computing device (for example, the system may be configured to provide an interface including the three-dimensional model and including one or more user-selectable elements (checkboxes, pulldown menus, and so on); a user may interact with the one or more user-selectable elements to alter particular characteristics of the 360-degree composite images; in a particular example, the user may adjust a paint color of a vehicle or introduce other modifications (such as adding a spoiler to a car), which changes can be updated in real time, column 35, lines 3-34).

As per claim 5, Francois further discloses the method of claim 1, wherein the second object is the first object (as shown in several of Figs. 14-21, vehicle 1402 is shown in different orientations and/or directions; that is, for example, vehicle 1402 (second object) in Fig. 15 is shown in a different orientation and/or direction than vehicle 1402 (first object) in Fig. 14. Thus, other than having different orientations/directions, vehicle 1402 is the same vehicle (same object)).
As per claim 6, Francois further discloses the method of claim 1, wherein the second object is different from the first object (for example, vehicle 1402 (second object) in Fig. 15 is shown in a different orientation and/or direction than vehicle 1402 (first object) in Fig. 14; thus, since the vehicle in each figure does not have the same orientation and/or direction, the second vehicle is different from the first vehicle, see at least Figs. 14 & 15).

As per claim 7, Francois further discloses the method of claim 1, further comprising: retrieving a 3D model of a third object based on at least one attribute of the second object (as shown in Figs. 15 and 16, the vehicle of Fig. 16 (third object) has one similar attribute (e.g., vehicle facing direction, vehicle color, etc.) with the vehicle of Fig. 15); providing for display an overlay of the 3D model of the third object at the computing device during display of the 2D content item (vehicle 1402 of Fig. 16 is shown displayed over the captured image or background image); generating for display a prompt at the user interface of the computing device (the method 1200 may include prompting the user to select the element in a different second image, at 1212; the method 1200 may then return to 1206 to receive a second user input, column 26, lines 56-59); receiving a second user interaction at the overlay of the 3D model of the second object via the 3D model of the third object, wherein the second user interaction is responsive to the prompt (in some embodiments, if the selected element cannot be resolved from the received inputs, the interface may prompt the user to re-select the element within the 360 composite image; alternatively, the mesh may be used to reliably associate a single user input selection with a 3D element corresponding to the user's selection; other embodiments are also possible, column 27, lines 28-34); and in response to the receiving the second user interaction: terminating display of the prompt at the user interface of the computing device (FIG. 27 depicts the interface 2700 accessible by the user to label the three-dimensional tag, in accordance with certain embodiments of the present disclosure; the interface 2700 can be used to add text, images, video, or any combination thereof for association with the three-dimensional point; in this example, the interface 2700 includes a text input 2704 and a keypad 2702 accessible by a user to label the three-dimensional tag; once the user selects the "Done" button, the tag labeling operation may be complete; subsequently, the user may double-tap (double-click) on the newly-created three-dimensional tag to see the images stabilized around the three-dimensional point associated with the three-dimensional tag, column 32, lines 24-36); and displaying a portion of the 2D content item at the computing device based on the second user interaction (as shown in Fig. 27, as the result of the user prompt ("tire"), the display of Fig. 28 highlights one of the tires of the car over the portion of the background image).

As per claim 10, Francois further discloses the method of claim 1, wherein the providing for display the overlay of the 3D model of the second object comprises: in response to identifying coordinates of the first object in at least one frame of the consecutive frames of the 2D content item, providing for display the overlay of the 3D model at coordinates proximate to the identified coordinates of the first object (Fig. 5 is a block diagram 500 of a sequence of frames including selected feature points that may be used to adjust frames to provide composite image capture, in accordance with certain embodiments of the present disclosure; the diagram 500 includes a set of three consecutive image frames: Frame A 502, frame B 504, and frame C 506.
Each of the depicted frames includes a corresponding feature point: feature points 508, 510, and 512, respectively. The feature points 508, 510, and 512 in each frame may correspond to the same point or feature on a subject. Between the times when the three depicted frames were captured, the x-y coordinates of the feature points changed because the position and/or orientation of the camera device changed. With respect to a pixel location, the feature point 508 may be located at an x-y pixel position (9, 13); the feature point 510 may be located at an x-y pixel position (5, 11); and the feature point 512 may be located at an x-y pixel position (3, 12)).

As per system claims 11-13, 15-17, and 20, these claims are also rejected under citations similar to those given for method claims 1-3, 5-7, and 10, respectively.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Francois in view of Sanchez (US 20230394530 A1). As per claims 4 and 14, Francois fails to disclose that the data of the second object is advertising data. Sanchez relates generally to the field of virtual reality interactive advertisement, specifically, real time personalization of interactive advertisements for a user within a virtual environment where the advertisement is being served. Sanchez, as illustrated in Fig. 8, discloses a functional block diagram of a 2D/3D Interactive Advertisement as Experience component 800, where the computing device is configured to generate a selected Ad 870 based on receiving a user ID 810 and a number of other parameters, and where the user ID 810 may be a unique identification number associated with the user. Paragraph [0059] discloses a User Ads Database 827, where the User Past Ad Interactions 826 may be in communication with the User Ads Database 827, and the User Ads Database 827 may be configured to store all user ad interaction history for all users and all available ads. User Past Ad Interactions 826 may be information about previous ads; for example, a user had selected "Red" when prompted for the color of a specific model car interactive advertisement. With all of the above information, the disclosed embodiments provide a method for a new advertisement to assume the user likes sports cars and prefers the color red. Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to employ the 2D/3D interactive advertisements of Sanchez with Francois so that a user or potential buyer of a vehicle would be interested to view/read the ad before purchasing a vehicle. Therefore, it would have been obvious to combine Francois and Sanchez to obtain the invention as specified in claims 4 and 14.

6. Claims 8-9 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Francois in view of Southin et al (US 12,394,193 B2). As per claim 8, Francois further discloses the method of claim 1, wherein the computing device is an extended reality (XR) device (the system may provide a form of augmented reality through which a user can optionally visualize any additional vehicle part option as it would look from every image in the 360-degree composite image, column 35, lines 31-34; in some embodiments, in the case of a stereoscopic virtual reality or augmented reality headset, the display provided for each eye may be indexed to a different view (based on yaw angle) from the 360-degree image capture, such that the vehicle appears three-dimensional to the user, column 36, lines 15-19). Francois fails to teach analyzing at least one frame of the consecutive frames of the 2D content item and outpainting the at least one frame using generative artificial intelligence (AI). Southin generally relates to the field of computing platforms, artificial intelligence, computer vision, and image processing; in particular, that disclosure relates to systems and methods of processing images to segment an object into its constituent parts. In some embodiments, the system may analyze the captured images to determine the appropriate cage; in some embodiments where the appropriate cage cannot be determined, the system may then select a cage that is similar to the captured object, or it may generate a new cage (e.g., based on the captured images and/or other cages in its data storage), column 20, line 58 - column 21, line 6. Collectively, front cage 502, side cage 602, and rear cage 702 may make up a complete Scalable Vector Cage of, for example, a vehicle (i.e., each cage 502, 602, 702 is a cage for a view of the object, collectively making up the full cage for that object), column 28, lines 43-52. In this exemplary process 400, the user 10 captures an image of the object (here a vehicle) (402). The image is then processed using advanced machine learning algorithms to extract imagery-based vehicle information (406). In some embodiments, process 400 can process the image using various semantic segmentation algorithms that use Deep Neural Networks to process images in a pixel format. In some embodiments, process 400 can use prompt-based segmentation approaches and Generative AI methods for image processing (column 26, lines 5-14).
Since both Francois and Southin are directed to viewing and manipulating 3D images of a vehicle, before the effective filing date of the invention it would have been obvious to a person of ordinary skill in the art to employ the generative AI of Southin with the system of Francois so that time and cost efficiency, creativity, innovation, and engagement in the creation and manipulation of 3D vehicle models would be improved. Therefore, it would have been obvious to combine Francois and Southin to obtain the invention as specified in claim 8.

As per claim 9, Francois in view of Southin further discloses the method of claim 8, further comprising: analyzing an environment proximate to the XR device (Francois: in some embodiments, in the case of a stereoscopic virtual reality or augmented reality headset, the display provided for each eye may be indexed to a different view (based on yaw angle) from the 360-degree image capture, such that the vehicle appears three-dimensional to the user; in some embodiments, this effect may be best perceived when the vehicle is being viewed as a miniature composite image, such that the separation between the user's left and right eyes leads them to index into separate stabilized images from the capture; alternatively, a large number of images may be captured to enhance the stereoscopic effect at higher magnifications, column 36, lines 15-26); projecting the outpainted frame at the environment proximate to the XR device (Francois: in some embodiments, the 3D-stabilized 360-degree composite image can be viewed in both virtual reality and augmented reality by mapping the same 3D information used for stabilization (3D center of mass and gravity vector) to a physical position in the augmented or virtual space, such that physically rotating the viewer's position around that center indexes into a succession of appropriate images that replicate the viewing experience when walking around the physical vehicle, column 35, line 67 - column 36, line 7); and receiving a second user interaction at the outpainted frame, wherein at least one object of the outpainted frame is interactive (Francois: FIG. 22 depicts an alternative view of the interface 2200 including image alignment aids 2208, in accordance with certain embodiments of the present disclosure, column 30, lines 63-65; note: in Fig. 22, the vehicle is shown in an outpainted frame).

As per system claims 18-19, these claims are also rejected under citations similar to those given for method claims 8-9, respectively.

Conclusion

7. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TADESSE HAILU, whose telephone number is (571) 272-4051 and whose email address is Tadesse.Hailu@USPTO.GOV. The examiner can normally be reached Monday-Friday, 9:30-5:30 (Eastern time). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bashore, William L., can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TADESSE HAILU/
Primary Examiner, Art Unit 2174

Prosecution Timeline

Feb 28, 2024
Application Filed
Oct 08, 2025
Non-Final Rejection — §102, §103
Jan 08, 2026
Response Filed
Jan 28, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596435
CONTACT OR CONTACTLESS INTERFACE WITH TEMPERATURE HAPTIC FEEDBACK
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12578976
SYSTEMS AND METHODS FOR AFFINITY-DRIVEN INTERFACE GENERATION
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12578849
METHOD, APPARATUS, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM FOR PAGE PROCESSING
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572198
USER INTERFACES FOR GAZE TRACKING ENROLLMENT
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566621
CUSTOMIZATION AND ENRICHMENT OF USER INTERFACES USING LARGE LANGUAGE MODELS
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 82% (+4.5%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 960 resolved cases by this examiner. Grant probability derived from career allow rate.
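The headline projections can be reproduced from the career counts stated earlier (747 granted of 960 resolved). A minimal sketch, assuming (as the rounding suggests) that the +4.5% interview lift is an absolute percentage-point adjustment to the base rate:

```python
# Career counts from the examiner's record.
granted = 747
resolved = 960

# Grant probability is taken directly from the career allow rate.
allow_rate = 100 * granted / resolved   # 77.81...%
print(round(allow_rate))                # -> 78 (displayed "78%")

# With-interview figure: base rate plus the +4.5-point interview lift.
# Assumption: the lift is absolute percentage points, which is what
# reproduces the displayed 82%; a relative lift would round to 81%.
with_interview = allow_rate + 4.5
print(round(with_interview))            # -> 82 (displayed "82%")
```

The percentage-point interpretation is an inference from the rounding behavior, not something the dashboard states explicitly.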
