Prosecution Insights
Last updated: April 19, 2026
Application No. 18/730,695

VIRTUAL CHARACTER CONTROL METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM

Non-Final OA: §101, §102, §103
Filed: Jul 19, 2024
Examiner: YANG, ANDREW GUS
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Hangzhou Alicloud Apsara Information Technology Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 69%, above average (384 granted / 558 resolved; +6.8% vs TC avg)
Interview Lift: +8.3% (moderate lift, measured on resolved cases with an interview)
Avg Prosecution: 2y 10m (typical timeline)
Total Applications: 583 across all art units (25 currently pending)

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§103: 61.9% (+21.9% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 558 resolved cases.

Office Action

Rejections: §101, §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 14 is rejected under 35 U.S.C. 101 because claim 14 is directed towards a computer-readable storage medium. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because the claimed invention is directed towards a computer-readable storage medium, which includes a transitory medium. See MPEP 2106.01. The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent.

"A transitory, propagating signal … is not a 'process, machine, manufacture, or composition of matter.' Those four categories define the explicit scope and reach of subject matter patentable under 35 U.S.C. § 101; thus, such a signal cannot be patentable subject matter." (In re Nuijten, 84 USPQ2d 1495 (Fed. Cir. 2007)).

Because the full scope of the claim as properly read in light of the disclosure appears to encompass non-statutory subject matter (i.e., because the specification defines/exemplifies a computer readable medium as a non-statutory signal, carrier wave, etc.), the claim as a whole is non-statutory. (See 1351 OG 212). A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments, to avoid a rejection under 35 U.S.C. § 101, by adding the limitation "non-transitory" to the claim. Any amendment to the claim should be commensurate with its corresponding disclosure.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-2, 5-9, 11-12, 14-15, and 18-21 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Roche et al. (U.S. Patent No. 10,521,946).

With respect to claim 1, Roche et al. disclose a virtual character control method, wherein the method comprises: acquiring one or more target keywords in a preset text (column 23, lines 46-48, As shown in FIG. 6A, a first input of text 602 may be received and processed to identify keywords associated with animation sequences); for each of the one or more target keywords, determining a first target character (column 23, lines 52-56, The first skeletal animation sequence 606 may cause animation of skeletal components of the avatar to extend arms with palms of hands facing upwards to create a gesture associated with "I don't know") and a second target character from the preset text, wherein the first target character corresponds to an action start position of the target keyword (column 23, lines 59-62, The animation may be triggered to start performance upon playback of the audio data at a certain word (e.g., the first keyword 604, etc.) or at a time code associated with the word or words that trigger the animation sequence) and the second target character corresponds to an action end position of the target keyword (column 24, lines 1-5, After completion of the first avatar animation sequence associated with "I don't know", the animation manager 204A may animate the avatar to a standard pose, such as to render the avatar in a standing position with hands in a relaxed and downward position near the avatar's waist); predicting an audio broadcasting duration from the first target character to the second target character (column 25, lines 43-53, The time codes 642 may span an amount of time (e.g., ti) used to playback the audio of the text or portion of the text. The time codes 642 may include first time codes 650 that indicate a time of occurrence of each word in the text. The time codes 642 may include second time codes 652 that indicate a time of occurrence of each phonic symbol in the text. The combined time codes may represent third time codes, which may be used to cue the different animation sequences 648, such as skeletal animations and facial animations); determining a target action file from one or more preset action files corresponding to the target keyword, wherein the target action file is used for driving a virtual character to perform a target action to obtain a target action video (column 23, lines 48-52, the text "I don't know if this is a true story" may be processed by the animation manager 204A to determine that one or more of the words "I don't know" include a first keyword(s) 604 that is associated with a first skeletal animation sequence 606), and a time duration of the target action video matches the audio broadcasting duration (column 23, lines 62-67, column 24, line 1, The first skeletal animation sequence 606 may last only part of the time that the animation manager 204A processes animations for facial features (e.g., mouth movements, etc.), which animate speaking of the text as discussed below in FIG. 6B. The animations may [be] synchronized with playback of the audio data); driving the virtual character in real time according to audio information of the preset text and a respective target action file corresponding to each target keyword to generate multimedia information (column 23, lines 38-45, FIG. 6A is a schematic diagram 600 of different illustrative animation body sequences for an avatar based on different input words spoken by the avatar. In some examples, a collection of animation sequences may be associated with different words, combination of words, phonic symbols, and/or other parts of speech, which may be included in the speech markup data (SMD) that is derived from the text), wherein the multimedia information comprises the audio information and a respective target action video corresponding to each target keyword (column 7, lines 28-31, The visual content might include a display of three dimensional graphical models presented in a virtual environment, GUI elements, text, images, video). Fig. 6A shows driving the avatar 608, 616, 624, and 632 according to the audio information from the text 602, 610, 618, and 626, based on action files 606, 614, 622, and 630.

With respect to claim 2, Roche et al. disclose the method according to claim 1, wherein the first target character is a first character of the target keyword (column 23, lines 52-56, The first skeletal animation sequence 606 may cause animation of skeletal components of the avatar to extend arms with palms of hands facing upwards to create a gesture associated with "I don't know"), or the first target character is a character before the target keyword by a preset number of characters; the second target character is a last character of the target keyword (column 24, lines 1-5, After completion of the first avatar animation sequence associated with "I don't know", the animation manager 204A may animate the avatar to a standard pose, such as to render the avatar in a standing position with hands in a relaxed and downward position near the avatar's waist), or the second target character is a last character of a sentence to which the target keyword belongs, or the second target character is a character before a next target keyword of the target keyword, or the second target character is a character after the last character of the target keyword by a preset number of characters. Fig. 6A shows the condition of the first character “I” of the target keyword 604 and last character “know” of the target keyword 604.
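Claims 1 and 2 together describe a concrete pipeline: find keywords in the text, mark a start and end character for each, predict how long the audio will take to speak that span, and pick a preset action clip whose length matches. As a reading aid only, here is a minimal Python sketch of that flow; the names (ACTION_LIBRARY, find_keywords, pick_action) and the per-character timing constant are illustrative assumptions, not code from the application or from Roche.

# Illustrative sketch of the claim 1 pipeline (hypothetical names and
# constants; not from the application or the Roche reference).

from dataclasses import dataclass

@dataclass
class ActionFile:
    name: str
    duration: float  # seconds of animation in the preset action clip

# Hypothetical keyword-to-action library ("preset action files").
ACTION_LIBRARY = {
    "i don't know": [ActionFile("shrug_palms_up", 1.2)],
    "big": [ActionFile("arms_wide", 0.9)],
}

CHAR_SECONDS = 0.08  # assumed per-character speaking rate

def find_keywords(text: str) -> list[tuple[str, int, int]]:
    """Return (keyword, first_char_index, last_char_index) for each hit."""
    hits = []
    lower = text.lower()
    for kw in ACTION_LIBRARY:
        i = lower.find(kw)
        if i >= 0:
            hits.append((kw, i, i + len(kw) - 1))
    return sorted(hits, key=lambda h: h[1])

def predict_duration(start: int, end: int) -> float:
    """Predict the audio broadcast duration of the character span."""
    return (end - start + 1) * CHAR_SECONDS

def pick_action(kw: str, audio_duration: float) -> ActionFile:
    """Choose the preset action whose clip length best matches the audio."""
    return min(ACTION_LIBRARY[kw], key=lambda a: abs(a.duration - audio_duration))

text = "I don't know if this is a true story"
for kw, start, end in find_keywords(text):
    dur = predict_duration(start, end)
    print(kw, start, end, round(dur, 2), pick_action(kw, dur).name)

A production system would derive timing from a speech engine's per-word and per-phoneme time codes, as Roche's "third time codes" do, rather than from a fixed per-character rate.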
With respect to claim 5, Roche et al. disclose the method according to claim 1, wherein driving the virtual character in real time according to the audio information of the preset text and the respective target action file corresponding to each target keyword to generate the multimedia information comprises: if the time duration of the target action video corresponding to the target keyword is less than the audio broadcasting duration, determining a time duration of driving the virtual character with a default action file (column 24, lines 2-5, the animation manager 204A may animate the avatar to a standard pose, such as to render the avatar in a standing position with hands in a relaxed and downward position near the avatar's waist), wherein the time duration of driving the virtual character with the default action file is a difference value between the audio broadcasting duration and the time duration of the target action video (column 24, lines 23-27, if playback of a portion of the audio data takes 10 seconds and the limit for each sequence is 3 seconds (N=3 seconds), then the playback could only have up to three different animation sequences that include gestures based on keywords); driving the virtual character in real time according to the audio information of the preset text, the respective target action file corresponding to each target keyword and the default action file, to generate the multimedia information (column 24, lines 52-55, The second skeletal animation sequence 614 may be preceded by the avatar being in the standard pose while the words "there was once a really" are played back).

With respect to claim 6, Roche et al. disclose the method according to claim 5, wherein the default action file drives the virtual character after the target action file corresponding to the target keyword; or the default action file drives the virtual character before the target action file corresponding to the target keyword (column 24, lines 52-55, The second skeletal animation sequence 614 may be preceded by the avatar being in the standard pose while the words "there was once a really" are played back). The default action file corresponding to the standard pose drives the virtual character before the target keyword "big."

With respect to claim 7, Roche et al. disclose the method according to claim 1, wherein before driving the virtual character in real time according to the audio information of the preset text and the respective target action file corresponding to each target keyword to generate the multimedia information, the method further comprises: aligning a broadcasting moment of an audio of a target key character in the target keyword with a playing moment of a key frame in the target action video corresponding to the target keyword (column 25, lines 65-67, column 26, lines 1-5, The animation sequences 648 and corresponding phonic symbols 646 may be stored in the data store 202A. The animation sequences 648 may be associated with the third time codes and may be played back at occurrence of the third time code to synchronize movement of facial features and/or mouth movements with playback of the audio data to animate speaking by the avatar); driving the virtual character in real time according to the audio information of the preset text and the respective target action file corresponding to each target keyword to generate the multimedia information comprises: driving the virtual character in real time according to the audio information of the preset text and the respective target action file corresponding to each target keyword to generate the multimedia information, so that the audio of the target keyword and the key frame are played at a same moment (column 23, line 67, column 24, line 1, The animations may [be] synchronized with playback of the audio data, column 31, lines 44-50, At 1114, the animation service 104A may output the audio data synchronized with output of combined animation sequences of the avatar that include the one or more first animation sequences and the one or more second animation sequences. The output may be a file, such as a downloadable animation file that includes the animation, the sound, or both).

With respect to claim 8, Roche et al. disclose the method according to claim 7, wherein aligning the broadcasting moment of the audio of the target key character in the target keyword with the playing moment of the key frame in the target action video corresponding to the target keyword comprises: predicting whether to play the target key character at a second moment after a first moment, wherein a time duration between the first moment and the second moment is a preset time duration (column 24, lines 12-19, Those additional animation sequences may or may not be selected for use by the animation manager 204A, such as based on application of rules. The rules may establish an amount of time (e.g., a falloff time) between animation sequences, which animations can precede or follow other animation sequences, and amounts of buffer time between (for execution of the standard pose), and so forth); if the target key character is to be played at the second moment, aligning a playing moment of a start frame of the target action video corresponding to the target keyword with the first moment, wherein a time duration between the start frame and the key frame is the preset time duration (column 24, lines 29-32, The rules may be stored in the data store 202A. The following provides additional examples of text and animation sequences to further illustrate the concepts discussed herein).

With respect to claim 9, Roche et al. disclose the method according to claim 8, wherein the multimedia information comprises the audio information and a virtual character broadcasting video (column 7, lines 29-31, The visual content might include a display of three dimensional graphical models presented in a virtual environment, GUI elements, text, images, video), the virtual character broadcasting video comprises the respective target action video corresponding to each target keyword, and the virtual character broadcasting video corresponds to the preset text (column 23, lines 52-59, The first skeletal animation sequence 606 may cause animation of skeletal components of the avatar to extend arms with palms of hands facing upwards to create a gesture associated with "I don't know". The animation manager 204A may cause rendering of texture mappings (e.g., clothing, skin, etc. on avatars) to depict a first avatar animation sequence 608); an initial broadcasting moment of the audio information is delayed by the preset time duration compared with an initial broadcasting moment of the virtual character broadcasting video (column 23, line 67, column 24, line 1, The animations may [be] synchronized with playback of the audio data).

With respect to claim 11, Roche et al. disclose a virtual character control apparatus (column 36, lines 41-45, FIG. 14 shows an example computer architecture for a computer 1400 capable of executing program components for providing a framework for utilizing different services to interact with VR/AR applications in the manner described above), comprising: at least one processor (column 36, lines 63-65, one or more central processing units ("CPUs") 1404 operate in conjunction with a chipset 1406) and a memory; the memory stores computer executable instructions (column 37, lines 15-22, The chipset 1406 may provide an interface to a RAM 1408, used as the main memory in the computer 1400. The chipset 1406 may further provide an interface to a computer-readable storage medium such as a read-only memory ("ROM") 1410 or non-volatile RAM ("NVRAM") for storing basic routines that help to startup the computer 1400 and to transfer information between the various components and devices); the at least one processor executes the computer executable instructions stored in the memory to execute the method of claim 1; see rationale for rejection of claim 1.

With respect to claim 12, Roche et al. disclose the apparatus according to claim 11, wherein the processor is further configured to execute the method of claim 7; see rationale for rejection of claim 7.

With respect to claim 14, Roche et al. disclose a computer-readable storage medium having a computer program stored thereon (column 37, lines 17-25, The chipset 1406 may further provide an interface to a computer-readable storage medium such as a read-only memory ("ROM") 1410 or non-volatile RAM ("NVRAM") for storing basic routines that help to startup the computer 1400 and to transfer information between the various components and devices. The ROM 1410 or NVRAM may also store other software components necessary for the operation of the computer 1400 in accordance with the examples described herein), wherein when the computer program is executed by a processor, the processor is caused to execute the operations of claim 1; see rationale for rejection of claim 1.

With respect to claim 15, Roche et al. disclose the apparatus according to claim 11 for executing the method of claim 2; see rationale for rejection of claim 2.

With respect to claim 18, Roche et al. disclose the apparatus according to claim 11, wherein the processor is specifically configured to execute the method of claim 5; see rationale for rejection of claim 5.

With respect to claim 19, Roche et al. disclose the apparatus according to claim 18 for executing the method of claim 6; see rationale for rejection of claim 6.

With respect to claim 20, Roche et al. disclose the apparatus according to claim 12 for executing the method of claim 8; see rationale for rejection of claim 8.

With respect to claim 21, Roche et al. disclose the apparatus according to claim 20 for executing the method of claim 9; see rationale for rejection of claim 9.
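Claims 7 and 8, mapped above, amount to a look-ahead scheduling rule: if the key character is predicted to be spoken a preset time from now, start the action clip now, so that its key frame (which sits that same preset time into the clip) lands exactly on the audio of the key character. A hypothetical Python sketch of that rule, with an assumed fixed lead time (the function name and constant are our own illustration):

# Hypothetical sketch of the claims 7-8 alignment rule: start the action
# video a preset lead time before the key character is spoken, so that the
# clip's key frame coincides with the audio of that character.

PRESET_LEAD = 0.5  # assumed seconds between the clip's start frame and key frame

def schedule_action(key_char_audio_time: float, now: float) -> float | None:
    """If the key character will be spoken PRESET_LEAD seconds from now,
    return the moment to start the action clip (i.e., now); else None."""
    if abs((key_char_audio_time - now) - PRESET_LEAD) < 1e-6:
        return now  # start frame at t1; key frame then plays at t2 = t1 + PRESET_LEAD
    return None

# The key character is predicted at t = 3.2 s; checking at t = 2.7 s
# (exactly PRESET_LEAD earlier) triggers the clip so its key frame lands at 3.2 s.
print(schedule_action(key_char_audio_time=3.2, now=2.7))  # -> 2.7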
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 3 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Roche et al. (U.S. Patent No. 10,521,946) in view of Niehaus et al. (U.S. PGPUB 20200388269).

With respect to claim 3, Roche et al. disclose the method according to claim 1. However, Roche et al. do not expressly disclose predicting the audio broadcasting duration from the first target character to the second target character comprises: predicting the audio broadcasting duration from the first target character to the second target character according to a number of characters and a number of punctuation marks between the first target character and the second target character, an audio broadcasting duration of a single character and a pause duration of each punctuation mark.

Niehaus et al., who also deal with audio presentation in computer graphics, disclose a method wherein predicting the audio broadcasting duration from the first target character to the second target character comprises: predicting the audio broadcasting duration from the first target character to the second target character according to a number of characters and a number of punctuation marks between the first target character and the second target character, an audio broadcasting duration of a single character and a pause duration of each punctuation mark (paragraph 141, determine the estimated time to present the subsequent portion of the audio presentation based on a feature of the text content of the plurality of unreviewed electronic communications; wherein the feature of the text content includes a word count or a character count of the text content). As shown in Fig. 1 (paragraph 18, In this portion of device speech 140, personal assistant device 120 outputs audio information in the form of natural language that greets user 110 by the user's name (i.e., “Sam”), identifies a quantity (i.e., “6”) of conversation threads that contain unreviewed electronic communications for the user, and identifies a duration of time (i.e., “about 5 minutes”) for the user to review the conversation threads through audible output of the contents of the electronic communications), the electronic communication comprises a first target character and second target character (start and end of communication) and punctuation. Roche et al. and Niehaus et al. are in the same field of endeavor, namely computer graphics.

Before the effective filing date of the claimed invention, it would have been obvious to apply the method wherein predicting the audio broadcasting duration from the first target character to the second target character comprises: predicting the audio broadcasting duration from the first target character to the second target character according to a number of characters and a number of punctuation marks between the first target character and the second target character, an audio broadcasting duration of a single character and a pause duration of each punctuation mark, as taught by Niehaus et al., to the Roche et al. system, because user 110 is informed by personal assistant device 120 as to the anticipated duration of an audio presentation of the unreviewed electronic communications prior to progressing through the audio presentation, thereby enabling the user to make informed decisions as to whether particular electronic communications should be reviewed or skipped (paragraph 18 of Niehaus et al.) and determine the estimated time to present the subsequent portion of the audio presentation based on a feature of the audio data; wherein the feature of the audio data includes an amount of the audio data or a duration of the audio data at a target presentation rate (paragraph 141 of Niehaus et al.).

With respect to claim 16, Roche et al. as modified by Niehaus et al. disclose the apparatus according to claim 11 for executing the method of claim 3; see rationale for rejection of claim 3.
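The claim 3 estimate that Niehaus is cited for is a simple linear model: each character between the two target characters contributes a fixed speaking duration, and each punctuation mark adds a pause. A short illustrative Python version, with assumed constants (CHAR_SECONDS and PAUSE_SECONDS are our own placeholders, not values from any reference):

# Illustrative implementation of the claim 3 duration estimate: characters
# contribute a fixed speaking time each; punctuation marks add a pause.

import string

CHAR_SECONDS = 0.08   # assumed audio duration of a single character
PAUSE_SECONDS = 0.30  # assumed pause duration of each punctuation mark

def predict_broadcast_duration(text: str, first: int, second: int) -> float:
    """Estimate speaking time for the span from the first to the second
    target character (inclusive)."""
    span = text[first : second + 1]
    n_punct = sum(ch in string.punctuation for ch in span)
    n_chars = len(span) - n_punct
    return n_chars * CHAR_SECONDS + n_punct * PAUSE_SECONDS

# 18 plain characters and 3 punctuation marks -> 18*0.08 + 3*0.30 = 2.34 s
print(round(predict_broadcast_duration("I don't know, really.", 0, 20), 2))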
Claim(s) 4 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Roche et al. (U.S. Patent No. 10,521,946) in view of Kim et al. (U.S. Patent No. 12,205,577).

With respect to claim 4, Roche et al. disclose the method according to claim 1. However, Roche et al. do not expressly disclose driving the virtual character in real time according to the audio information of the preset text and the respective target action file corresponding to each target keyword to generate the multimedia information comprises: if the time duration of the target action video corresponding to the target keyword is greater than the audio broadcasting duration, adjusting, according to the audio broadcasting duration, a time duration of driving the virtual character with the target action file corresponding to the target keyword, so that the time duration of driving the virtual character with the target action file is the same as the audio broadcasting duration; driving the virtual character in real time according to the audio information of the preset text, the respective target action file corresponding to each target keyword, and the time duration of driving the virtual character with each target action file, to generate the multimedia information.

Kim et al., who also deal with animating a virtual character, disclose a method wherein driving the virtual character in real time according to the audio information of the preset text and the respective target action file corresponding to each target keyword to generate the multimedia information comprises: if the time duration of the target action video corresponding to the target keyword is greater than the audio broadcasting duration, adjusting, according to the audio broadcasting duration, a time duration of driving the virtual character with the target action file corresponding to the target keyword, so that the time duration of driving the virtual character with the target action file is the same as the audio broadcasting duration; driving the virtual character in real time according to the audio information of the preset text, the respective target action file corresponding to each target keyword, and the time duration of driving the virtual character with each target action file, to generate the multimedia information (column 19, lines 63-67, The response generator component 620 may generate the output video data and audio data to be commensurate in a time duration (i.e., a length of time of output of the video data corresponds to a length of time of output of the audio data). Thus, the device 110 may synchronize display of video and output of audio by commencing display of the video and output of the audio at the same time, column 21, lines 10-18, The 3D model may determine a viseme (i.e., a facial image used to describe a particular sound) for each sound represented in the natural language data, and may map each viseme to a 3D blendshape (used to deform a 3D shape to show different expression) with an emotion corresponding to the emotion identifier of the respective sound. The 3D model may transition between blendshapes smoothly as the avatar transitions to speak the natural language data). Roche et al. and Kim et al. are in the same field of endeavor, namely computer graphics.

Before the effective filing date of the claimed invention, it would have been obvious to apply the method wherein driving the virtual character in real time according to the audio information of the preset text and the respective target action file corresponding to each target keyword to generate the multimedia information comprises: if the time duration of the target action video corresponding to the target keyword is greater than the audio broadcasting duration, adjusting, according to the audio broadcasting duration, a time duration of driving the virtual character with the target action file corresponding to the target keyword, so that the time duration of driving the virtual character with the target action file is the same as the audio broadcasting duration; driving the virtual character in real time according to the audio information of the preset text, the respective target action file corresponding to each target keyword, and the time duration of driving the virtual character with each target action file, to generate the multimedia information, as taught by Kim et al., to the Roche et al. system, because the system may synchronize display of the facial expressions of the avatar with output of the synthesized speech ("reading" the story) and display of the generated image. As such, it will be appreciated that the teachings herein provide an improved user experience (column 3, lines 9-13 of Kim et al.).

With respect to claim 17, Roche et al. as modified by Kim et al. disclose the apparatus according to claim 11, wherein the processor is specifically configured to execute the method of claim 4; see rationale for rejection of claim 4.

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Roche et al. (U.S. Patent No. 10,521,946) in view of Marsella (U.S. PGPUB 20140267313).

With respect to claim 10, Roche et al. disclose the method according to claim 1, wherein the method further comprises: acquiring a plurality of sample keywords in a sample text; adjusting the plurality of sample keywords to obtain a sample keyword set (column 24, lines 35-38, For example, the text "big" may be processed by the animation manager 204A and determined as a second keyword(s) 612 that is associated with a second skeletal animation sequence 614, column 24, lines 58-62, For example, the text "inside" may be processed by the animation manager 204A and determined as a third keyword(s) 620 that is associated with a third skeletal animation sequence 622); for each sample keyword in the sample keyword set, generating one or more preset action files corresponding to the sample keyword (column 24, lines 39-42, The second skeletal animation sequence 614 may cause animation of skeletal components of the avatar to move an arm near a head of the avatar and extend an index finger from a thumb to create a gesture associated with "big", column 24, lines 62-66, The third skeletal animation sequence 622 may cause animation of skeletal components of the avatar to extend an arm outward from a torso and point a finger downwards toward the ground to create a gesture associated with "inside"). However, Roche et al. do not expressly disclose determining the target action file from the one or more preset action files corresponding to the target keyword comprises: determining a sample keyword matching the target keyword from the sample keyword set, and determining the target action file from one or more preset action files corresponding to the sample keyword.

Marsella, who also deals with animating a virtual character, discloses a method wherein determining the target action file from the one or more preset action files corresponding to the target keyword comprises: determining a sample keyword matching the target keyword from the sample keyword set, and determining the target action file from one or more preset action files corresponding to the sample keyword (paragraph 6, a Behavior Expression Animation Toolkit (BEAT) analyzes the text that the virtual character is to speak with user provided rules to detect keywords and phrases. Then, the program automatically generates the speech part of a virtual character, and the associated nonverbal behavior and facial expression given raw text utterances). Roche et al. and Marsella are in the same field of endeavor, namely computer graphics.

Before the effective filing date of the invention, it would have been obvious to apply the method wherein determining the target action file from the one or more preset action files corresponding to the target keyword comprises: determining a sample keyword matching the target keyword from the sample keyword set, and determining the target action file from one or more preset action files corresponding to the sample keyword, as taught by Marsella, to the Roche et al. system, because this would allow for using pre-generated animations, or action files.
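Claims 4 and 5, addressed above, are two halves of one duration-matching rule: when the action clip runs longer than the audio span, retime it to fit (claim 4, rejected over Kim); when it runs shorter, fill the remainder with a default, standard-pose action (claim 5, mapped to Roche). A hypothetical Python sketch of both branches (the function name and return shape are our own illustration):

# Hypothetical sketch of the claims 4-5 duration-matching rules.

def fit_action_to_audio(action_len: float, audio_len: float) -> dict:
    if action_len > audio_len:
        # Claim 4: retime the clip so it plays over exactly the audio span.
        return {"speed": action_len / audio_len, "default_fill": 0.0}
    # Claim 5: play at normal speed; the default action fills the difference
    # between the audio duration and the action video duration.
    return {"speed": 1.0, "default_fill": round(audio_len - action_len, 3)}

print(fit_action_to_audio(action_len=1.2, audio_len=0.9))  # compress playback
print(fit_action_to_audio(action_len=0.9, audio_len=1.5))  # pad 0.6 s default pose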
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW GUS YANG whose telephone number is (571)272-5514. The examiner can normally be reached M-F 9 AM - 5:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW G YANG/
Primary Examiner, Art Unit 2614
1/7/26

Prosecution Timeline

Jul 19, 2024
Application Filed
Jan 07, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602856: DICING ORACLE FOR TEXTURE SPACE SHADING
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602872: DRIVABLE IMPLICIT THREE-DIMENSIONAL HUMAN BODY REPRESENTATION METHOD
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12592023: INTERSECTION TESTING FOR RAY TRACING
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12579728: MEMORY ALLOCATION FOR RECURSIVE PROCESSING IN A RAY TRACING SYSTEM
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12567207: THREE-DIMENSIONAL MODELING AND RECONSTRUCTION OF CLOTHING
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 77% (+8.3%)
Median Time to Grant: 2y 10m
PTA Risk: Low

Based on 558 resolved cases by this examiner. Grant probability derived from career allow rate.
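As a rough check on how these figures fit together (assuming they are simple ratios of the displayed counts): 384 granted / 558 resolved ≈ 68.8%, which rounds to the 69% grant probability shown, and 69% plus the +8.3% interview lift gives approximately 77.3%, shown as 77%.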

Free tier: 3 strategy analyses per month