Prosecution Insights
Last updated: April 19, 2026
Application No. 18/827,717

APPARATUS, METHOD AND COMPUTER PROGRAM FOR PRODUCING A STREAM OF VIDEO DATA OF A VIRTUAL ENVIRONMENT

Non-Final OA (§102, §103)
Filed: Sep 07, 2024
Examiner: FLORA, NURUN N
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Beyond Sports B.V.
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 86% (above average; 331 granted / 387 resolved; +23.5% vs TC avg)
Interview Lift: +1.3% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 1m (fast prosecutor; 24 currently pending)
Total Applications: 411 (career history; across all art units)

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 387 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 15, and 20 are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Cappello et al. (US 20200035019 A1, hereinafter Cappello).

Regarding claim 1, Cappello discloses an apparatus for producing a stream of video data of a virtual environment (¶0001, fig.
5, ¶0037-0040, ¶0053, claim 15 and dependents), the apparatus comprising circuitry configured to: acquire a virtual environment, the virtual environment being output of a simulation of a physical environment (Hence in an embodiment of the present invention, a received 2D stream of a live event can be used to drive an augmented or virtual representation of the event in which one or more of the live event participants are replaced by virtual avatars, and alternatively or in addition, optionally the viewpoint of the event can also be modified by the viewer, as described herein below, ¶0071.); detect occurrence of an event in the virtual environment (…the virtual camera angle may be determined based on events that are detected as occurring within the video. In the example of a football game, the detected events may correspond to e.g. a goal, offside, foul, throw-in, corner, free-kick, etc, ¶0076. Also see ¶0079-0080, and claims 4, 11, 16); control at least one virtual camera in the virtual environment in dependence upon a detected event within the virtual environment, wherein the at least one virtual camera captures a view within the virtual environment (In additional or alternative embodiments, the virtual camera angle may be determined based on events that are detected as occurring within the video. In the example of a football game, the detected events may correspond to e.g. a goal, offside, foul, throw-in, corner, free-kick, etc, ¶0076. Once the relevant event has been detected, the view processor 506 may determine a corresponding virtual camera angle from which that event is to be viewed in the graphical representation of the scene. In some examples, this may involve selecting a predetermined position and/or orientation of the virtual camera that has been determined (e.g. by a developer) as being appropriate for that event, ¶0079. 
In some embodiments, the virtual camera angle may be determined based on one or more players that are detected as contributing to a detected event. In the example of a football game, this may involve for example, detecting a first player and second player as contributing to an event (e.g. such as an assist and a subsequent goal), and determining a virtual camera angle that enables the actions of both players to be seen in the graphical representation. In some examples, this might involve determining a virtual camera angle that corresponds to the view of one of the players on the pitch. For example, in the event of a foul, the virtual camera angle may correspond to the view point of a referee that is detected as being on the pitch. This may allow a user to see (a graphical representation) of what the referee could see before the referee made his/her decision, ¶0080); and produce a stream of video data of the virtual environment using the at least one virtual camera (Hence in an embodiment of the present invention, a received 2D stream of a live event can be used to drive an augmented or virtual representation of the event in which one or more of the live event participants are replaced by virtual avatars, and alternatively or in addition, optionally the viewpoint of the event can also be modified by the viewer, as described herein below, ¶0071. Having determined the virtual camera angle, the view processor 506 transmits an indication of the virtual camera angle, and the graphical representation of the scene, to an output unit (not shown). The output unit outputs (i.e. renders) an image corresponding to the view of the graphical representation, from the determined virtual camera angle. This view may then be displayed to the user, at their display device, ¶0081). 
Regarding claim 2, Cappello discloses the apparatus according to claim 1, wherein the physical environment is a sporting match (In preferred embodiments, the video is of a real-time event, such as a live sporting event. The sporting event may be, for example, a football match, and the three-dimensional scene may correspond to part of the pitch captured by the video camera, ¶0038. Also see ¶0057, ¶0061, ¶0089, claim 4).

Regarding claim 3, Cappello discloses the apparatus according to claim 2, wherein the event is an event in the sporting match comprising at least one of: a shot, a goal, an offside, a sending off, a substitution, a foul and/or a pass (In additional or alternative embodiments, the virtual camera angle may be determined based on events that are detected as occurring within the video. In the example of a football game, the detected events may correspond to e.g. a goal, offside, foul, throw-in, corner, free-kick, etc, ¶0076).

Regarding claim 4, Cappello discloses the apparatus according to claim 1, wherein when the at least one virtual camera comprises a plurality of virtual cameras, each of the plurality of virtual cameras captures a different view within the virtual environment (In some examples, where available (e.g. if at the broadcasting side) the image generator 508 may generate a representation of the player based on images from multiple cameras, using a known photogrammetry technique, ¶0069).

Regarding claim 15, Cappello discloses the apparatus according to claim 1, wherein the circuitry is further configured to: detect an interruption in the virtual environment (In additional or alternative embodiments, the virtual camera angle may be determined based on events that are detected as occurring within the video. In the example of a football game, the detected events may correspond to e.g.
a goal, offside, foul, throw-in, corner, free-kick, etc, ¶0076-0079), control at least one virtual camera in the virtual environment in dependence upon a previously detected event within the virtual environment (The events may be detected, for example, using machine learning. For example, a machine learning model may be trained with video clips of known events and labels of those events, and trained to determine a correlation between the content of those video clips and the corresponding labels, ¶0079. In some embodiments, the virtual camera angle may be determined based on one or more players that are detected as contributing to a detected event. In the example of a football game, this may involve for example, detecting a first player and second player as contributing to an event (e.g. such as an assist and a subsequent goal), and determining a virtual camera angle that enables the actions of both players to be seen in the graphical representation, ¶0080.), and produce a stream of video data of the virtual environment using the at least one virtual camera (Hence in an embodiment of the present invention, a received 2D stream of a live event can be used to drive an augmented or virtual representation of the event in which one or more of the live event participants are replaced by virtual avatars, and alternatively or in addition, optionally the viewpoint of the event can also be modified by the viewer, as described herein below, ¶0071. Having determined the virtual camera angle, the view processor 506 transmits an indication of the virtual camera angle, and the graphical representation of the scene, to an output unit (not shown). The output unit outputs (i.e. renders) an image corresponding to the view of the graphical representation, from the determined virtual camera angle. This view may then be displayed to the user, at their display device, ¶0081). 
Regarding claim 20, Cappello discloses a non-transitory computer readable medium storing a computer program comprising instructions that, when executed by a computer, cause the computer to perform a method of producing a stream of video data of a virtual environment (¶0101, claims 13-14), the method comprising: receiving a virtual environment from a virtual environment provider, the virtual environment being output of a simulation of a physical environment; detecting occurrence of an event in the virtual environment; controlling at least one virtual camera in the virtual environment in dependence upon a detected event within the virtual environment, wherein the at least one virtual camera captures a view within the virtual environment; and producing a stream of video data of the virtual environment using the at least one virtual camera (see substantively similar claim 1 rejection above).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5, 6, 9-13, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Cappello in view of Tanaka et al. (US 20200329189 A1, hereinafter Tanaka).

Regarding claim 5, Cappello discloses the apparatus according to claim 1, except, wherein in order to control the at least one virtual camera in accordance with the detected event, the circuitry is configured to: select a control script corresponding to the detected event from a selection of control scripts stored in a storage unit, and control the at least one virtual camera using control instructions contained in the control script. However, Tanaka discloses that a plurality of selectable scenes corresponding to a plurality of virtual camera paths may be stored in the virtual camera management unit 08130. When the plurality of virtual camera paths are stored in the virtual camera management unit 08130, metadata including scripts of scenes corresponding to the virtual camera paths, elapsed times of a game, prescribed times before and after the scenes, and player information may also be input and stored [¶0174]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Cappello, with the teaching of Tanaka, such that in case a particular event is detected, the virtual camera is controlled thereafter based on a scripted path, wherein the script is stored in a storage unit, to obtain, wherein in order to control the at least one virtual camera in accordance with the detected event, the circuitry is configured to: select a control script corresponding to the detected event from a selection of control scripts stored in a storage unit, and control the at least one virtual camera using control instructions contained in the control script, because combining prior art elements ready to be improved according to known methods to yield predictable results is obvious.
Furthermore, such combination would improve the efficiency of the overall system, by enabling event-driven automation of the virtual camera.

Regarding claim 6, Cappello in view of Tanaka discloses the apparatus according to claim 5, wherein the control instructions include instructions defining: a selection of a virtual camera (Tanaka: ¶0057, fig. 40, ¶0411), a time period for how long a virtual camera should be selected (Tanaka: ¶0174, ¶0175), a movement of a virtual camera (Tanaka: ¶0395), a transition between virtual cameras (Tanaka: ¶0411, ¶0416) and/or instructions to generate a new virtual camera in the virtual environment.

Regarding claim 9, Cappello discloses the apparatus according to claim 5, wherein when multiple control scripts exist for the detected event, the circuitry is configured to execute the control scripts in accordance with a priority level of the control scripts (Tanaka: ¶0416).

Regarding claim 10, Cappello discloses the apparatus according to claim 9, wherein the circuitry is configured to execute the script with the highest priority level; or wherein the circuitry is configured to execute the multiple control scripts in a sequence starting with the script with the highest priority level (Tanaka: ¶0416).

Regarding claim 11, Cappello discloses the apparatus according to claim 1, except, wherein the circuitry is configured to acquire the virtual environment from a virtual environment provider. However, Tanaka discloses that the camera path management unit 08106 of the virtual camera management unit 08130 stores the environmental metadata including scene name, player information, elapsed time and a prescribed time before and after the scene which are associated with the virtual camera path … etc. (¶0176). Back-end server 270 stores all this information received from the virtual camera management unit 08130 (¶0175). The user selects a virtual camera path from a name of a scene, a player or an elapsed time of the game from the back-end server 270.
The back-end server 270 interactively provides scene images to the end-user terminal 190 (¶0175). Therefore, back-end server 270 functions as a virtual environment provider, wherefrom a user using the terminal 190 can obtain the virtual scene/environment based on user selection (¶0174-0176, fig. 8). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Cappello, with the teaching of Tanaka of storing and providing the scene information according to user selection and saved configuration, to obtain, wherein the circuitry is configured to acquire the virtual environment from a virtual environment provider, because combining prior art elements ready to be improved according to known methods to yield predictable results is obvious. Furthermore, such combination would enhance the versatility of the overall system.

Regarding claim 12, Cappello in view of Tanaka discloses the apparatus according to claim 11, wherein the circuitry is configured to store the virtual environment in a storage unit as it is received from the virtual environment provider (Tanaka: The end-user terminal 190 requests generation of an image corresponding to the selected virtual camera path to the back-end server 270 so as to interactively obtain an image delivery service, ¶0175. Also see ¶0174-0176, fig. 8).

Regarding claim 13, Cappello in view of Tanaka discloses the apparatus according to claim 12, wherein the circuitry is configured to introduce a predetermined time delay between the stream of video data and the virtual environment as acquired from the virtual environment provider (Tanaka: Furthermore, the generation method may be determined in accordance with a length of an allowable processing delay time in a period from when imaging is performed to when an image is output.
In a case where priority is given to a degree of freedom even though a delay time is long, the MBR is used, whereas in a case where a reduction of a delay time is required, the IBR is used, ¶0157. As described above, the virtual camera path management unit 08106 stores the metadata including a scene name, a player, an elapsed time, and a prescribed time before and after the scene which are associated with the virtual camera path 08002. For example, the virtual camera path 08002 having a scene name “goal scene” and a prescribed time before and after the scene of 10 seconds in total is extracted. Furthermore, the authoring unit, ¶0176).

Regarding claim 16, Cappello discloses the apparatus according to claim 15, except, wherein in order to control the at least one virtual camera in accordance with the previously detected event, the circuitry is configured to: select a replay control script corresponding to the previously detected event from a selection of replay control scripts stored in a storage unit, and control the at least one virtual camera using control instructions contained in the replay control script. However, Tanaka discloses selecting a replay control script corresponding to a previously detected event from a selection of replay control scripts stored in a storage unit, and controlling a virtual camera using control instructions contained in the replay control script (When the plurality of virtual camera paths are stored in the virtual camera management unit 08130, metadata including scripts of scenes corresponding to the virtual camera paths, elapsed times of a game, prescribed times before and after the scenes, and player information may also be input and stored. The virtual camera operation UI 330 notifies the back-end server 270 of these virtual camera paths as virtual camera parameters, ¶0174. An authoring unit 08107 has a function of performing editing when the operator generates a replay image.
The authoring unit 08107 extracts a portion of the virtual camera path 08002 stored in the virtual camera path management unit 08106 as an initial value of the virtual camera path 08002 for a replay image in response to a user operation. As described above, the virtual camera path management unit 08106 stores the metadata including a scene name, a player, an elapsed time, and a prescribed time before and after the scene which are associated with the virtual camera path 08002. For example, the virtual camera path 08002 having a scene name “goal scene” and a prescribed time before and after the scene of 10 seconds in total is extracted. Furthermore, the authoring unit 08107 sets a reproduction speed in an edited camera path. For example, slow reproduction is set to the virtual camera path 08002 during a ball flies to a goal. Note that, when the image is replaced by another image from another viewpoint, that is, when the virtual camera path 08002 is changed, the user operates the virtual camera 08001 again using the virtual camera operation unit 08101, ¶0176). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA ) to modify the invention of Cappello, with the teaching of Tanaka of, select a replay control script that is stored in a storage unit, and which corresponding previously detected event from a selection of replay control scripts, controls a virtual camera using control instructions contained in the replay control script, because, combining prior art elements ready to be improved according to known method to yield predictable results is obvious. Furthermore, such combination would enhance the automation of the operation of the virtual camera and thus improving versatility of the overall system. 
Regarding claim 17, Cappello discloses the apparatus according to claim 15, except, when a plurality of events have been previously detected, the circuitry is configured to select one or more previously detected events from the plurality of previously detected events in accordance with at least one of: a priority level of each of the plurality of previously detected events, an order of each of the plurality of previously detected events, a maximum number of previously detected events which can be selected and/or a category of each of the plurality of previously detected events. However, Tanaka discloses, camera management through selecting a previously stored scripts, wherein a priority of an event is used for selection of the event (When the plurality of virtual camera paths are stored in the virtual camera management unit 08130, metadata including scripts of scenes corresponding to the virtual camera paths, elapsed times of a game, prescribed times before and after the scenes, and player information may also be input and stored, ¶0174). Tanaka further discloses, selection of a higher priority event based camera is selected over a lower priority one (An authoring unit 08107 has a function of performing editing when the operator generates a replay image. The authoring unit 08107 extracts a portion of the virtual camera path 08002 stored in the virtual camera path management unit 08106 as an initial value of the virtual camera path 08002 for a replay image in response to a user operation. As described above, the virtual camera path management unit 08106 stores the metadata including a scene name, a player, an elapsed time, and a prescribed time before and after the scene which are associated with the virtual camera path 08002. For example, the virtual camera path 08002 having a scene name “goal scene” and a prescribed time before and after the scene of 10 seconds in total is extracted. 
Furthermore, the authoring unit 08107 sets a reproduction speed in an edited camera path. For example, slow reproduction is set to the virtual camera path 08002 during a ball flies to a goal. Note that, when the image is replaced by another image from another viewpoint, that is, when the virtual camera path 08002 is changed, the user operates the virtual camera 08001 again using the virtual camera operation unit 08101, ¶0176 Furthermore, in a case where the image generation corresponding to a virtual camera path 08002 having a high priority degree is requested, image generation of a virtual camera path 08002 having a low priority degree may be performed later, ¶0416. Also see ¶0443). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA ) to modify the invention of Cappello, with the teaching of Tanaka of, selecting a previously stored scripts, wherein a priority of an event is used for selection of the event, to obtain, when a plurality of events have been previously detected, the circuitry is configured to select one or more previously detected events from the plurality of previously detected events in accordance with at least one of: a priority level of each of the plurality of previously detected events, an order of each of the plurality of previously detected events, a maximum number of previously detected events which can be selected and/or a category of each of the plurality of previously detected events, because, combining prior art elements ready to be improved according to known method to yield predictable results is obvious. Furthermore, such combination would enhance the automation of the operation of the virtual camera and thus improving versatility of the overall system. 
Regarding claim 18, Cappello discloses the apparatus according to claim 15, except, wherein the control instructions of the replay control scripts include instructions defining: a selection of a virtual camera, a time period for how long a virtual camera should be selected, a movement of a virtual camera, a transition between virtual cameras, instructions to generate a new virtual camera in the virtual environment, a playback speed for a virtual camera, and/or a portion of the previously detected event for which a virtual camera should be selected. However, Tanaka discloses replay control scripts include instructions defining: a selection of a virtual camera, a time period for how long a virtual camera should be selected, a movement of a virtual camera, a transition between virtual cameras, instructions to generate a new virtual camera in the virtual environment, a playback speed for a virtual camera, and/or a portion of the previously detected event for which a virtual camera should be selected (When the plurality of virtual camera paths are stored in the virtual camera management unit 08130, metadata including scripts of scenes corresponding to the virtual camera paths, elapsed times of a game, prescribed times before and after the scenes, and player information may also be input and stored. The virtual camera operation UI 330 notifies the back-end server 270 of these virtual camera paths as virtual camera parameters, ¶0174. An authoring unit 08107 has a function of performing editing when the operator generates a replay image. The authoring unit 08107 extracts a portion of the virtual camera path 08002 stored in the virtual camera path management unit 08106 as an initial value of the virtual camera path 08002 for a replay image in response to a user operation. 
As described above, the virtual camera path management unit 08106 stores the metadata including a scene name, a player, an elapsed time, and a prescribed time before and after the scene which are associated with the virtual camera path 08002. For example, the virtual camera path 08002 having a scene name “goal scene” and a prescribed time before and after the scene of 10 seconds in total is extracted. Furthermore, the authoring unit 08107 sets a reproduction speed in an edited camera path. For example, slow reproduction is set to the virtual camera path 08002 during a ball flies to a goal. Note that, when the image is replaced by another image from another viewpoint, that is, when the virtual camera path 08002 is changed, the user operates the virtual camera 08001 again using the virtual camera operation unit 08101, ¶0176. Furthermore, in a case where the image generation corresponding to a virtual camera path 08002 having a high priority degree is requested, image generation of a virtual camera path 08002 having a low priority degree may be performed later, ¶0416. Also see ¶0443). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA ) to modify the invention of Cappello, with the teaching of Tanaka of, the control instructions of the replay control scripts include instructions defining: a selection of a virtual camera, a time period for how long a virtual camera should be selected, a movement of a virtual camera, a transition between virtual cameras, instructions to generate a new virtual camera in the virtual environment, a playback speed for a virtual camera, and/or a portion of the previously detected event for which a virtual camera should be selected, because, combining prior art elements ready to be improved according to known method to yield predictable results is obvious. 
Furthermore, such combination would enhance the automation of the operation of the virtual camera and thus improve the versatility of the overall system.

Allowable Subject Matter

Claims 7-8, 14, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 7, the prior art of record, taken alone or in combination, fails to reasonably disclose or suggest, wherein the storage unit stores different control scripts for different users and wherein the circuitry is configured: to receive user identification to identify a user, and wherein in order to control the at least one virtual camera, the circuitry is configured to select a control script corresponding to the detected event and the identified user from a selection of control scripts stored in a storage unit, and control the at least one virtual camera using control instructions contained in the control script.

Regarding claim 14, the prior art of record, taken alone or in combination, fails to reasonably disclose or suggest, wherein the circuitry is configured to pre-select a virtual camera in the virtual environment upon detection of the detected event and control the pre-selected virtual camera in the virtual environment in dependence upon the detected event within the virtual environment when a time corresponding to the predetermined time delay has expired after detection of the detected event.
Regarding claim 19, the prior art of record, taken alone or in combination, fails to reasonably disclose or suggest, the apparatus according to claim 16, wherein the storage unit stores different replay control scripts for different users and wherein the circuitry is configured: to receive user identification to identify a user, and wherein in order to control the at least one virtual camera, the circuitry is configured to select a replay control script corresponding to the detected event and the identified user from a selection of replay control scripts stored in a storage unit, and control the at least one virtual camera using control instructions contained in the replay control script.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NURUN FLORA whose telephone number is (571) 272-5742. The examiner can normally be reached M-F 9:30 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NURUN FLORA/
Primary Examiner, Art Unit 2619
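The mechanism at the center of the §103 rejections of claims 5 and 9-10 is a storage unit of event-keyed control scripts, executed in priority order when an event is detected. A minimal sketch of that idea follows; the event names, script fields, and selection logic are assumptions chosen for illustration, not the applicant's claimed implementation or anything actually disclosed in Cappello or Tanaka:

```python
# Hypothetical sketch of claims 5 and 9-10: look up the control script(s)
# stored for a detected event type, ordered highest priority first.
from dataclasses import dataclass, field

@dataclass
class ControlScript:
    name: str
    priority: int
    instructions: list = field(default_factory=list)  # e.g. camera moves, transitions

# "Storage unit" mapping detected-event type -> candidate control scripts (claim 5).
SCRIPT_STORE = {
    "goal": [
        ControlScript("goal_wide_angle", priority=1),
        ControlScript("goal_striker_pov", priority=5),
    ],
    "foul": [ControlScript("foul_referee_pov", priority=3)],
}

def scripts_for_event(event: str) -> list:
    """Return matching scripts sorted highest-priority first (claims 9-10)."""
    return sorted(SCRIPT_STORE.get(event, []),
                  key=lambda s: s.priority, reverse=True)

print([s.name for s in scripts_for_event("goal")])
# ['goal_striker_pov', 'goal_wide_angle']
```

A response distinguishing over Tanaka would need to explain why Tanaka's user-selected virtual camera paths are not this kind of automatic, event-triggered script selection.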

Prosecution Timeline

Sep 07, 2024
Application Filed
Mar 04, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592025: IMAGE RENDERING BASED ON LIGHT BAKING
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586250: COMPRESSION AND DECOMPRESSION OF SUB-PRIMITIVE PRESENCE INDICATIONS FOR USE IN A RENDERING SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586254: High-quality Rendering on Resource-constrained Devices based on View Optimized RGBD Mesh
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579751: TECHNIQUES FOR PARALLEL EDGE DECIMATION OF A MESH
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12561896: INSERTING THREE-DIMENSIONAL OBJECTS INTO DIGITAL IMAGES WITH CONSISTENT LIGHTING VIA GLOBAL AND LOCAL LIGHTING INFORMATION
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview (+1.3%): 87%
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 387 resolved cases by this examiner. Grant probability derived from career allow rate.
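The derivation described in the note above is simple to check: a minimal sketch, assuming the dashboard rounds the career allow rate (331 granted of 387 resolved, shown earlier) to the nearest whole percent and adds the 1.3-point interview lift on top:

```python
# Reproduce the headline projections from the examiner's career data shown
# above. Assumption: the dashboard simply rounds the raw allow rate.
granted, resolved = 331, 387           # career outcomes for this examiner
allow_rate = 100 * granted / resolved  # 85.53...%

interview_lift = 1.3                   # percentage-point lift with interview

print(round(allow_rate))                   # 86 -> "Grant Probability"
print(round(allow_rate + interview_lift))  # 87 -> "With Interview (+1.3%)"
```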
