Prosecution Insights
Last updated: April 19, 2026
Application No. 18/801,012

SALIENCY-BASED DIGITAL ENVIRONMENT ADAPTATION

Non-Final OA: §102, §103
Filed: Aug 12, 2024
Examiner: FLORA, NURUN N
Art Unit: 2619
Tech Center: 2600 (Communications)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 1m
Grant Probability with Interview: 87%

Examiner Intelligence

Career Allow Rate: 86% (above average; +23.5% vs TC avg; 331 granted / 387 resolved)
Interview Lift: +1.3% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 1m (fast prosecutor; 24 applications currently pending)
Total Applications: 411 (career history, across all art units)

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 387 resolved cases.
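The per-statute deltas are each examiner rate minus the Tech Center average. Notably, all four displayed deltas are consistent with a single implied baseline of 40.0% per statute. A minimal sketch of that arithmetic (illustrative only; the 40.0% figure is inferred from the displayed deltas, not stated on the page):

```python
# Illustrative sketch (assumed model): each "vs TC avg" delta is the
# examiner's statute-specific rate minus the Tech Center average.
# TC_AVG = 40.0 is not stated on the page; it is the value implied by
# every displayed row (examiner rate minus delta equals 40.0 in all four).
TC_AVG = 40.0
examiner_rates = {"§101": 5.5, "§103": 46.5, "§102": 27.1, "§112": 9.6}

def delta_vs_tc(examiner_rate: float, tc_avg: float = TC_AVG) -> float:
    """Signed difference rendered on the dashboard as '+x% / -x% vs TC avg'."""
    return round(examiner_rate - tc_avg, 1)

deltas = {statute: delta_vs_tc(rate) for statute, rate in examiner_rates.items()}
# deltas -> {'§101': -34.5, '§103': 6.5, '§102': -12.9, '§112': -30.4}
```

Each computed delta matches the displayed figure, which supports reading the black line as a single TC-wide baseline rather than a per-statute one.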

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 21-30, 33-40 is/are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Doken et al. (US 20230259202; hereinafter Doken).

Regarding claim 21, Doken discloses a system (title, abstract, fig. 35) comprising: at least one processor (3506, fig. 35, ¶0267, ¶0280); and memory (3508, fig.
35, ¶0271) storing instructions that, when executed by the at least one processor, causes the system to perform a set of operations, the set of operations comprising (¶0267-0276, ¶0280): adapting, for display to a first user, a digital environment to include first content at a location of the digital environment, wherein the first content is determined from a first set of candidate content based on a content saliency metric for the first user and the location is determined based on a location saliency metric for the first user (See ¶0050-0052. In one embodiment, the virtual objects may be obtained from a database, a library of virtual objects, or a plurality of Internet and third-party sources. The virtual objects obtained may be based on user interest. Such interest may be determined based on the user profile. It may also be determined based on the user's consumption history, recommendations made to the user by other users, recommendations made based on analysis performed by an artificial intelligence algorithm, urgent notifications, weather notifications, or items that are trending and popular, ¶0053. Based on user interest, a plurality of virtual objects that can potentially be overlayed may be identified and scored. The score calculation, in one embodiment, may be performed by a scoring engine. The calculation may involve analyzing each virtual object in the library and applying a variety of formulas, weighted averages, means, and other calculations to determine a score, ¶0054 - scores are understood as ‘saliency metric’. Once scored, one or more virtual objects from all the virtual objects identified may be selected. The selection may be based on the highest score, the most relevant score, or the virtual objects associated with the most current need. The selection may also be based on a promotion of the “virtual object” that may be displayed in higher priority by the system owner, ¶0055. 
Characteristics of the displayed surface, such as different portions or zones of the displayed surface, depth, and different shapes and contours of the surface, may be analyzed, and the resulting data may be used to make determinations of where to overlay or superimpose a virtual object on the surface in the virtual environment, ¶0050. The selected virtual objects may be overlayed on the surface based on restrictions, rules, and guidelines of the surface policies, ¶0055. The overlay or superimposition of the virtual object may be displayed only on the display of the viewing device, such as the display of a mobile phone or the transparent glass of a smart glass or wearable device, such that it provides a feeling as if the virtual object is actually displayed on the physical surface, ¶0069); capturing a representation of the digital environment (The methods analyze a live view of a surface viewed through camera or a transparent lens – Abstract step 410, fig. 4, step 2610, fig. 26 In one embodiment, the scene analyzer 2610 accesses any one or more cameras, GPS, lidars, microphones, barometers or other sensors and components of the electronic device being used by the user to view a surface. The scene analyzer determines the details of a surface such as its dimensions, contours, curvature, size, geometric properties, color, texture, background, depth, and other details that provide a full picture of the surface. In some embodiments, the scene analyzer may determine such details by executing image recognition software, ¶0220); determining, for a second user, second content from a second set of candidate content based on, for the second user, a second content saliency metric associated with each instance of content in the set of candidate content (In some instances, when a policy is not desirable to the user or a group of users, they may send a request to the surface owner to change the policy. 
When the number of users is a large quantity, they may demand a change to the surface policy. For example, if a large number of users want to see the virtual object even bigger, more space than regularly allotted by the surface policy, they may request for allotment of larger space and change in the policy to allow them to post the bigger virtual object, ¶0078 That determination of whether to remove or enhance a virtual object either specifically for only one user or for all users may depend upon the amount of interest and use by a majority or at least a threshold number of users in the conference session, ¶0099 See fig. 29, ¶0231-0240. Based on a user’s gaze, user A, B or a group of users per se, a virtual object is enhanced. Also see fig. 3, block 33, ¶0094 and fig. 20); processing the representation of the digital environment to replace the first content with the second content at the location (The virtual object may also be removed or minimized on the user interfaces of other users of the conference that are not engaged with the virtual object.
The enhancements may include enlarging the virtual object, highlighting it, changing the opacity, or making other enhancements to display it more prominently than another virtual object with which the user is not engaged, ¶0062 For example, if a large number of users want to see the virtual object even bigger, more space than regularly allotted by the surface policy, they may request for allotment of larger space and change in the policy to allow them to post the bigger virtual object, ¶0076 In yet another embodiment, Macy's may replace a virtual object being overlayed by another virtual object that relates to an urgent event, such as an emergency, an immediate forecast, or any other imminent event, ¶0108 At block 2970, in one embodiment, in response to determining that none of the participants of the conferencing call session is gazing at virtual objects 2-N the control circuitry may either minimize virtual objects 2-N or delete them from that conferencing user interface displayed on the screen of each participant's electronic device, ¶0239. At block 3030, in response to determining that there has been no engagement, or minimal engagement according to certain embodiments, with the virtual object, the control circuitry 3420 may minimize or delete the virtual object from the user interface of the participant, ¶0244); and providing the processed representation for display to the second user (steps 425, fig. 4, 1980 fig. 19, 2060, fig. 20, 2950-2970, fig. 29; 3050, 3060, fig. 30; 3250, fig. 32).

Regarding claim 22, Doken discloses the system of claim 21, wherein replacing the first content with the second content comprises overlaying the second content over the first content (detecting live broadcast transmissions; overlaying virtual objects in a frame of a live broadcast, abstract, ¶0048. See step 3250 in fig. 32. One example of such display of overlaying the virtual objects is depicted in block 3330 of FIG. 33. Also steps 1970, 1980, fig. 19. Step 2340, fig. 23.
Steps 3020-3060, fig. 30. FIGS. 34-35 may also be used for analyzing surfaces, scoring virtual objects, analyzing policies, overlaying virtual objects, providing virtual object enhancements, detecting live broadcast transmissions, identifying virtual objects in a live broadcast or in an on-demand media asset, overlaying virtual objects in a frame of a live broadcast or in an on-demand media asset, identifying users of a conference call, identifying icons and all tools, functions, and functionalities displayed on an interface of a participant of a conference call, accessing components of electronic devices, such as cameras, gyroscopes, accelerometers, heart rate monitors, invoking an AI or ML algorithm to perform an analysis on any of the above mentioned data, accessing user's consumption history, gauging user's interest in a virtual object, accessing virtual, mixed, or augmented reality headsets and their displays, animating virtual objects, and all the functionalities discussed associated with the figures mentioned in this application, ¶0262). Regarding claim 23, Doken discloses the system of claim 21, wherein the representation comprises at least one of a recording of the digital environment or a stream of the digital environment (Virtual objects are also overlayed in frames of a live broadcast based on their scores. Virtual objects displayed on a conference call interface, which may be meeting tools or icons associated with other conferencing functionality, are enhanced, or removed from the user interface based on their utilization, Abstract Embodiments of the present disclosure relate to displaying, adjusting, and enhancing virtual objects, such as in an augmented or virtual reality, live streaming, or video conferencing environment, based on interactive and dynamic content, ¶0001). 
Regarding claim 24, Doken discloses the system of claim 21, wherein the second set of candidate content is determined based on the location of the first content (In yet another embodiment, Macy's may allow a virtual object to be posted within a threshold time of an occurrence of an event, such as, if a James Bond movie is playing at 5:00 PM in a theater within a threshold distance of where the Macy's store is located, then Macy's may allow a virtual object relating to the James Bond movie that directs people to the theater to be overlayed only within the two hours before the movie starts, ¶0108). Regarding claim 25, Doken discloses the system of claim 21, wherein at least one of the first content or the second content is associated with another digital environment (For example, the surface owner, such as Macy's, may allow a virtual object to be overlayed only between the hours of 5:00 PM and 8:00 PM. In another embodiment, Macy's may allow a virtual object to be posted on a particular day, such as a Wednesday. In yet another embodiment, Macy's may allow a virtual object to be posted within a threshold time of an occurrence of an event, such as, if a James Bond movie is playing at 5:00 PM in a theater within a threshold distance of where the Macy's store is located, then Macy's may allow a virtual object relating to the James Bond movie that directs people to the theater to be overlayed only within the two hours before the movie starts. In yet another embodiment, Macy's may replace a virtual object being overlayed by another virtual object that relates to an urgent event, such as an emergency, an immediate forecast, or any other imminent event. A variety of other timing, schedule, and duration options may be used by the surface owner to allow a virtual object to be overlayed and then removed, ¶0108). 
Regarding claim 26, Doken discloses the system of claim 21, wherein each second content saliency metric is determined based on a set of factors associated with the second user (In one embodiment, the virtual objects may be obtained from a database, a library of virtual objects, or a plurality of Internet and third-party sources. The virtual objects obtained may be based on user interest. Such interest may be determined based on the user profile. It may also be determined based on the user's consumption history, recommendations made to the user by other users, recommendations made based on analysis performed by an artificial intelligence algorithm, urgent notifications, weather notifications, or items that are trending and popular, ¶0053). Regarding claim 27, Doken discloses the system of claim 26, wherein the set factors comprises: a content attribute associated with the determined content (Embodiments of the present disclosure relate to displaying, adjusting, and enhancing virtual objects, such as in an augmented or virtual reality, live streaming, or video conferencing environment, based on interactive and dynamic content, ¶0001 In one embodiment, a process for overlaying a virtual object on a surface and enhancing the virtual object based on interactive and dynamic content is disclosed. 
The process includes detecting a live image of a surface, determining surface policies, obtaining a plurality of virtual objects that align with user interests, scoring the virtual objects and selecting a certain number of virtual objects based on the score, displaying the selected virtual objects if they comply with the surface policies, and enhancing the virtual objects based on interactions, needs, and utilization of the user – ¶0049 In one embodiment, the overlayed virtual object may be enhanced based on interactive and dynamic content received by the control circuitry from the user, group of users, or collective viewers, ¶0056); an environment attribute associated with the digital environment (Virtual objects that meet the policies of the surface and a scoring criterion are overlayed on the surface in the virtual environment and enhanced based on a plurality of enhancement factors, Abstract. See policy module in fig. 9, and display environment restrictions in fig. 10. See block 13 of fig. 1. Also see fig. 8, ¶0143-0148); a user profile attribute associated with the user (In one embodiment, the virtual objects may be obtained from a database, a library of virtual objects, or a plurality of Internet and third-party sources. The virtual objects obtained may be based on user interest. Such interest may be determined based on the user profile, ¶0053. Also see, 2410, fig. 24); and a population attribute associated with a population of the digital environment (In another embodiment, since the PowerPoint document was gazed upon by User 1 and 2 and used by User 3, the control circuitry may determine that a majority of users in the conference session may have an interest in the PowerPoint document. 
As such, the control circuitry may enhance the PowerPoint icon, such as by enlarging it, on the user interface of all the conference users, ¶0099 At block 2970, when several participants are involved, in one embodiment, the control circuitry may determine whether a majority of the participants have gazed on a particular virtual object, such as virtual object 2. In response to determining that a majority of the participants of the conference call session, but not all of the participants, have gazed at virtual object 2, the control circuitry may determine that virtual object 2 is important to this conferencing session and as such enhance virtual object 2 for all of the participants on the user interfaces displayed on their electronic devices. Such enhancement of virtual object 2 may be displayed even on the user interfaces of those participants that have not gazed on virtual object 2, since a majority may deem the tool as important to the conference call. Other factors, besides having a majority, may also be considered in determining whether to enhance a virtual object only on the user interfaces of the participants that have gazed at the virtual object or on all the user interfaces of all of the participants of the conferencing call, regardless of their gazes, ¶0240).

Regarding claim 28, Doken discloses a method for content adaptation of a digital environment (title, abstract), the method comprising: obtaining a set of factors associated with a digital environment (The method fetches a virtual object and calculates a score based on user interest, user gaze, and user engagement.
Virtual objects that meet the policies of the surface and a scoring criterion are overlayed on the surface in the virtual environment and enhanced based on a plurality of enhancement factors, Abstract Embodiments of the present disclosure relate to displaying, adjusting, and enhancing virtual objects, such as in an augmented or virtual reality, live streaming, or video conferencing environment, based on interactive and dynamic content, ¶0001. Figs. 10-11, shows how virtual objects are placed on the virtual environment. Also see figs. 14-17, 25); determining, based on the set of factors, a mechanic from a set of candidate mechanics based on a saliency metric for a user associated with each mechanic in the set of candidate mechanics (See ¶0050-0052. In one embodiment, the virtual objects may be obtained from a database, a library of virtual objects, or a plurality of Internet and third-party sources. The virtual objects obtained may be based on user interest. Such interest may be determined based on the user profile. It may also be determined based on the user's consumption history, recommendations made to the user by other users, recommendations made based on analysis performed by an artificial intelligence algorithm, urgent notifications, weather notifications, or items that are trending and popular, ¶0053. Based on user interest, a plurality of virtual objects that can potentially be overlayed may be identified and scored. The score calculation, in one embodiment, may be performed by a scoring engine. The calculation may involve analyzing each virtual object in the library and applying a variety of formulas, weighted averages, means, and other calculations to determine a score, ¶0054 - scores are understood as ‘saliency metric’. Once scored, one or more virtual objects from all the virtual objects identified may be selected. The selection may be based on the highest score, the most relevant score, or the virtual objects associated with the most current need. 
The selection may also be based on a promotion of the “virtual object” that may be displayed in higher priority by the system owner, ¶0055 Characteristics of the displayed surface, such as different portions or zones of the displayed surface, depth, and different shapes and contours of the surface, may be analyzed, and the resulting data may be used to make determinations of where to overlay or superimpose a virtual object on the surface in the virtual environment, ¶0050. The selected virtual objects may be overlayed on the surface based on restrictions, rules, and guidelines of the surface policies, ¶0055. The overlay or superimposition of the virtual object may be displayed only on the display of the viewing device, such as the display of a mobile phone or the transparent glass of a smart glass or wearable device, such that it provides a feeling as if the virtual object is actually displayed on the physical surface, ¶0069 In one embodiment, a policy module 900 from FIG. 9 may be invoked by the control circuitry to determine the polices of the surface, different zones of the surface, and determine how the policies apply to the virtual object to be posted. As depicted, various polices 905-930 may be analyzed to determine a fit with the virtual object to be overlayed, ¶0149); adapting the digital environment to include the determined mechanic, thereby modifying the digital environment to offer an individualized experience to the user (ibid determining to modify the displayed virtual object, wherein the determination is based on a calculated score of a plurality of factors relating to user interest; and in response to determining to modify the virtual object: modifying one or more attributes of the virtual object; transmitting the virtual object with the modified one or more attributes to the portable mixed reality, virtual reality, or augmented reality device; and rendering the modified virtual object as an overlay on the surface, claim 34). 
Regarding claim 29, Doken discloses the method of claim 28, wherein adapting the digital environment comprises replacing an existing mechanic with the determined mechanic (The virtual object may also be removed or minimized on the user interfaces of other users of the conference that are not engaged with the virtual object. The enhancements may include enlarging the virtual object, highlighting it, changing the opacity, or making other enhancements to display it more prominently than another virtual object with which the user is not engaged, ¶0062 For example, if a large number of users want to see the virtual object even bigger, more space than regularly allotted by the surface policy, they may request for allotment of larger space and change in the policy to allow them to post the bigger virtual object, ¶0076 In yet another embodiment, Macy's may replace a virtual object being overlayed by another virtual object that relates to an urgent event, such as an emergency, an immediate forecast, or any other imminent event, ¶0108 At block 2970, in one embodiment, in response to determining that none of the participants of the conferencing call session is gazing at virtual objects 2-N the control circuitry may either minimize virtual objects 2-N or delete them from that conferencing user interface displayed on the screen of each participant's electronic device, ¶0239. At block 3030, in response to determining that there has been no engagement, or minimal engagement according to certain embodiments, with the virtual object, the control circuitry 3420 may minimize or delete the virtual object from the user interface of the participant, ¶0244). 
Regarding claim 30, Doken discloses the method of claim 28, wherein the set factors comprises: a content attribute associated with the determined content (Embodiments of the present disclosure relate to displaying, adjusting, and enhancing virtual objects, such as in an augmented or virtual reality, live streaming, or video conferencing environment, based on interactive and dynamic content, ¶0001 In one embodiment, a process for overlaying a virtual object on a surface and enhancing the virtual object based on interactive and dynamic content is disclosed. The process includes detecting a live image of a surface, determining surface policies, obtaining a plurality of virtual objects that align with user interests, scoring the virtual objects and selecting a certain number of virtual objects based on the score, displaying the selected virtual objects if they comply with the surface policies, and enhancing the virtual objects based on interactions, needs, and utilization of the user, ¶0049 In one embodiment, the overlayed virtual object may be enhanced based on interactive and dynamic content received by the control circuitry from the user, group of users, or collective viewers, ¶0056); an environment attribute associated with the digital environment (Virtual objects that meet the policies of the surface and a scoring criterion are overlayed on the surface in the virtual environment and enhanced based on a plurality of enhancement factors, Abstract. See policy module in fig. 9, and display environment restrictions in fig. 10. See block 13 of fig. 1 Also see fig. 8, ¶0143-0148); a user profile attribute associated with the user (See ¶0050-0052. In one embodiment, the virtual objects may be obtained from a database, a library of virtual objects, or a plurality of Internet and third-party sources. The virtual objects obtained may be based on user interest. Such interest may be determined based on the user profile. 
It may also be determined based on the user's consumption history, recommendations made to the user by other users, recommendations made based on analysis performed by an artificial intelligence algorithm, urgent notifications, weather notifications, or items that are trending and popular, ¶0053. Also see 2410, fig. 24); and a population attribute associated with a population of the digital environment (In another embodiment, since the PowerPoint document was gazed upon by User 1 and 2 and used by User 3, the control circuitry may determine that a majority of users in the conference session may have an interest in the PowerPoint document. As such, the control circuitry may enhance the PowerPoint icon, such as by enlarging it, on the user interface of all the conference users, ¶0099 At block 2970, when several participants are involved, in one embodiment, the control circuitry may determine whether a majority of the participants have gazed on a particular virtual object, such as virtual object 2. In response to determining that a majority of the participants of the conference call session, but not all of the participants, have gazed at virtual object 2, the control circuitry may determine that virtual object 2 is important to this conferencing session and as such enhance virtual object 2 for all of the participants on the user interfaces displayed on their electronic devices. Such enhancement of virtual object 2 may be displayed even on the user interfaces of those participants that have not gazed on virtual object 2, since a majority may deem the tool as important to the conference call. Other factors, besides having a majority, may also be considered in determining whether to enhance a virtual object only on the user interfaces of the participants that have gazed at the virtual object or on all the user interfaces of all of the participants of the conferencing call, regardless of their gazes, ¶0240). 
Regarding claim 33, Doken discloses, the method of claim 28, wherein the digital environment is an augmented reality digital environment (¶0001). Regarding method claim(s) 34-40, although wording is different, the material is considered substantively equivalent to the system claim(s) 21-27 as described above. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Doken in view of Mizuno et al. (US 20190199997 A1, hereinafter Mizuno). Regarding claim 31, Doken discloses the method of claim 28, except, wherein the mechanic comprises at least one of a different gameplay progression or a different difficulty level for the digital environment. However, Mizuno discloses that a virtual object to be synthesized showing additional information such as progression of a capturing-target game, or an image showing statistical information related to a game (¶0096). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Doken, such that synthesizing of the virtual image of Doken further is based on gameplay progression in a gaming scenario, to obtain, wherein the mechanic comprises at least one of a different gameplay progression or a different difficulty level for the digital environment, because combining prior art elements ready to be improved according to known methods to yield predictable results is obvious. Furthermore, such combination would enhance the versatility of the overall system.

Claim 32 is/are rejected under 35 U.S.C. 103 as being unpatentable over Doken in view of Wakeford et al. (US 8944908 B1, hereinafter Wakeford).

Regarding claim 32, Doken discloses the method of claim 28, except, wherein adapting the digital environment further comprises determining whether to spawn a non-player character depending on an aptitude of the user for the mechanic. However, Wakeford discloses spawning of a non-player character in the virtual environment based on user set parameters (Col. 9, lines 3-11). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Doken with the teaching of spawning a non-player character for a virtual gaming environment to be augmented with the virtual environment disclosed by Doken, to obtain, wherein adapting the digital environment further comprises determining whether to spawn a non-player character depending on an aptitude of the user for the mechanic. Furthermore, such combination would enhance the versatility of the overall system.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NURUN FLORA whose telephone number is (571)272-5742. The examiner can normally be reached M-F 9:30 am -5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /NURUN FLORA/Primary Examiner, Art Unit 2619

Prosecution Timeline

Aug 12, 2024
Application Filed
Oct 03, 2025
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592025
IMAGE RENDERING BASED ON LIGHT BAKING
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586250
COMPRESSION AND DECOMPRESSION OF SUB-PRIMITIVE PRESENCE INDICATIONS FOR USE IN A RENDERING SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586254
High-quality Rendering on Resource-constrained Devices based on View Optimized RGBD Mesh
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579751
TECHNIQUES FOR PARALLEL EDGE DECIMATION OF A MESH
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12561896
INSERTING THREE-DIMENSIONAL OBJECTS INTO DIGITAL IMAGES WITH CONSISTENT LIGHTING VIA GLOBAL AND LOCAL LIGHTING INFORMATION
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
87%
With Interview (+1.3%)
2y 1m
Median Time to Grant
Low
PTA Risk
Based on 387 resolved cases by this examiner. Grant probability derived from career allow rate.
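The footnote says grant probability is derived from the career allow rate, which suggests a simple model: granted over resolved, with the stated interview lift added on top. A hedged sketch under that assumed formula, using only counts shown on this page:

```python
# Assumed model, inferred from the footnote above: grant probability
# equals the examiner's career allow rate (granted / resolved), and the
# interview figure adds the dashboard's stated +1.3% lift. The counts
# 331 and 387 come from the Examiner Intelligence section.
granted, resolved = 331, 387

grant_probability = round(100 * granted / resolved)   # career allow rate, in percent
with_interview = round(grant_probability + 1.3)       # apply +1.3% interview lift

print(grant_probability, with_interview)  # 86 87
```

Both outputs match the displayed projections (86% and 87%), consistent with the footnote's description; the actual product may of course apply a more elaborate model.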
