Prosecution Insights
Last updated: April 19, 2026
Application No. 18/900,323

METHOD FOR DETERMINING AN EFFECT VIDEO, ELECTRONIC DEVICE AND STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Sep 27, 2024
Examiner: YANG, YI
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Lemon Inc.
OA Round: 4 (Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 5-6
Median Time to Grant: 2y 9m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 71% (above average; 295 granted / 415 resolved; +9.1% vs TC avg)
Interview Lift: +17.2% (strong; measured on resolved cases with interview)
Avg Prosecution: 2y 9m typical timeline; 39 applications currently pending
Total Applications: 454 across all art units
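The headline numbers above fit together with simple arithmetic. Below is a minimal sketch that reproduces them, assuming the interview-adjusted probability is simply the career allow rate plus the reported interview lift; the page's actual model may differ.

```python
# Sanity-checking the examiner stats shown above. The additive
# interview-lift model is an assumption, not the page's documented method.

granted, resolved = 295, 415

allow_rate = granted / resolved                 # ~0.711 -> displayed as 71%
interview_lift = 0.172                          # reported +17.2% lift
with_interview = allow_rate + interview_lift    # ~0.883 -> displayed as 88%

print(f"Career allow rate:        {allow_rate:.1%}")
print(f"Grant prob. w/ interview: {with_interview:.1%}")
```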

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      7.4%     -32.6%
§103      76.0%    +36.0%
§102      2.7%     -37.3%
§112      3.3%     -36.7%

Tech Center average is an estimate. Based on career data from 415 resolved cases.
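The "vs TC avg" column can be inverted to recover the Tech Center baseline. A quick check follows; the uniform 40.0% baseline it reveals is an inference from the arithmetic, not a figure the page states.

```python
# Recovering the implied Tech Center baseline from the table above.
# rate - delta = TC average; the result is 40.0% for every statute,
# which suggests a single TC-wide estimate (an inference, not a given).

rates  = {"101": 7.4, "103": 76.0, "102": 2.7, "112": 3.3}
deltas = {"101": -32.6, "103": 36.0, "102": -37.3, "112": -36.7}

for statute in rates:
    tc_avg = rates[statute] - deltas[statute]
    print(f"§{statute}: {rates[statute]:5.1f}% (implied TC avg {tc_avg:.1f}%)")
```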

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment filed on 12/30/2025 has been entered. Claims 1-20 remain pending in the application.

Priority

Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9, 12-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cordes (U.S. Patent 10796489) in view of Yoon (U.S. Patent Application 20150304603).

Regarding claim 16, Cordes discloses an electronic device, comprising: at least one processor (processing unit 3004); and at least one storage apparatus (storage 3018) configured to store at least one program, wherein the at least one program, upon being executed by the at least one processor, causes the at least one processor to implement a method for determining an effect video (col. 2 line 41-55: one or more processors and one or more memory devices comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform operations… the virtual object in the 3-D virtual environment to transition to a second animation state in the animation graph, where the transition may be triggered by identifying the motion or position as the predefined motion or position), and the method comprises:

in a capture page, in response to an effect triggering operation, adding a virtual effect model on a first object, wherein the first object is an object in a capture screen (col. 25 line 9-14: receiving a real-time motion or position of a performer in a first real-world environment (1302). The performer may include a motion-capture performer whose movements/position are captured by a motion-capture system. The motion or position of the performer may also be captured by a plurality of depth cameras; col. 10 line 48-58: if a new human character enters the first performance area 102, the volume of the new human character as approximated by the depth cameras 104 can be used to select a digital model of a human character from a content library that is the same or similar to the new human character in the first performance area. As the new human character moves throughout the first performance area 102, the model of the new human character selected from the content library can be moved throughout the 3-D virtual scene 202 based on the motion of the volume detected by the depth cameras 104; col. 33 line 64-col. 34 line 15: receiving a real-time motion or position of a performer in a first real-world environment (1804)… identifying the motion or position as a predefined motion or position (1806). Some embodiments may encode the motion or position as a 3-D motion-capture frame comprising vertices and/or wireframe representations of the performer; col. 12 line 45-54: FIG. 4 illustrates a resulting motion-capture output derived from the images captured by the motion-capture cameras 304, according to some embodiments. The motion-capture output can be used to generate a representation of the motion-capture performer 308 in the second performance area 302. In this example, each of the visual fiducials used to track the motion of the motion-capture performer 308 can be represented by a vertex, and each of the vertices can be connected to represent a skeleton or wireframe of the motion-capture performer 308; col. 13 line 21-37: recognizes predefined movements of the frame 404 and triggers the insertion and/or motion of predefined virtual assets in a 3-D environment accordingly. FIG. 5 illustrates a view of a 3-D environment 502 that includes virtual assets that are generated and/or controlled by movements of the motion-capture frame... in FIG. 5, the frame 404 has been replaced with a 3-D model of a clown 506. In some embodiments, the 3-D model of the clown 506 may include a full 3-D character model of a clown character with control points that are linked to the vertices on the frame 404); and

controlling a first virtual part of the virtual effect model corresponding to the first object to move, and adjusting a first entity part of the first object to a preset state to obtain a first effect video (col. 33 line 53-54: governing virtual animations by identifying predefined motions/positions of a motion-capture performer; col. 34 line 57-63: causing the virtual object in the 3-D virtual environment to transition to a second animation state in the animation graph (1810). In some embodiments, the transition may be triggered by identifying the motion or position as the predefined motion or position. The second animation state may include causing the virtual object to appear or a virtual effect to execute; col. 13 line 21-37: recognizes predefined movements of the frame 404 and triggers the insertion and/or motion of predefined virtual assets in a 3-D environment accordingly. FIG. 5 illustrates a view of a 3-D environment 502 that includes virtual assets that are generated and/or controlled by movements of the motion-capture frame... in FIG. 5, the frame 404 has been replaced with a 3-D model of a clown 506. In some embodiments, the 3-D model of the clown 506 may include a full 3-D character model of a clown character with control points that are linked to the vertices on the frame 404).

Cordes discloses all the features with respect to claim 16 as outlined above. Cordes further discloses that user interface input devices may include a keyboard, pointing devices such as a mouse or trackball… audio input devices with voice command recognition systems, microphones, and other types of input devices (col. 46 line 60-67). However, Cordes fails to disclose audio information collected, wherein the first entity part comprises at least one selected from a group consisting of an eye part, an ear part, an eyebrow part, a nose part and a mouth part of the first object.
Yoon discloses, based on audio information collected, controlling a first virtual part to move, wherein the first entity part comprises at least one selected from a group consisting of an eye part, an ear part, an eyebrow part, a nose part and a mouth part of the first object (paragraph [0072]: The image processing module 260 may process the images of the counterpart user to express the analyzed counterpart user's emotion. For example, the image processing module 260 analyzes the voice and expression data and, when it is determined that the counterpart user is happy, change the shapes of the eyes and sides of the mouth of the counterpart user's face to emulate smiling. If it is determined that the counterpart user is speaking based on the voice and expression data, the image processing module 260 animates the mouth region to emulate talking on the counterpart user's image; Yoon's teaching of changing expression based on voice can be combined with Cordes' device so as to control the virtual effect model's expression based on voice). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes to change expression based on voice as taught by Yoon, to provide the user with reliable communication when errors occur.

Regarding claim 17, Cordes as modified by Yoon discloses the electronic device according to claim 16, wherein the effect triggering operation comprises at least one of: detecting that the first object is included in the capture screen; detecting that a control corresponding to a first effect is triggered; detecting that a body movement of the first object in the capture screen is consistent with a body movement for adding an effect; or detecting that voice information triggers a wake-up word for adding an effect (Cordes' col. 34 line 60-61: the transition may be triggered by identifying the motion or position as the predefined motion or position). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes to change expression based on voice as taught by Yoon, to provide the user with reliable communication when errors occur.

Regarding claim 18, Cordes as modified by Yoon discloses the electronic device according to claim 16, before the adding the virtual effect model on a first object, further comprising: determining a virtual effect model corresponding to the first object from a virtual effect library (Cordes' col. 10 line 48-58: if a new human character enters the first performance area 102, the volume of the new human character as approximated by the depth cameras 104 can be used to select a digital model of a human character from a content library that is the same or similar to the new human character in the first performance area. As the new human character moves throughout the first performance area 102, the model of the new human character selected from the content library can be moved throughout the 3-D virtual scene 202 based on the motion of the volume detected by the depth cameras 104), wherein the virtual effect library comprises at least one virtual effect model to be selected (Cordes' col. 16 line 51-55: these 3-D models 210, 212 may be automatically selected by matching virtual assets from a content library or virtual asset data store that match within a predetermined threshold the volumetric model created by the depth cameras 104).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes to change expression based on voice as taught by Yoon, to provide the user with reliable communication when errors occur.

Regarding claim 19, Cordes as modified by Yoon discloses the electronic device according to claim 18, wherein the determining the virtual effect model corresponding to the first object from the virtual effect library comprises: based on basic attribute information of the first object, determining the virtual effect model corresponding to the first object (Cordes' col. 10 line 48-58: if a new human character enters the first performance area 102, the volume of the new human character as approximated by the depth cameras 104 can be used to select a digital model of a human character from a content library that is the same or similar to the new human character in the first performance area. As the new human character moves throughout the first performance area 102, the model of the new human character selected from the content library can be moved throughout the 3-D virtual scene 202 based on the motion of the volume detected by the depth cameras 104; Cordes' col. 16 line 51-55: these 3-D models 210, 212 may be automatically selected by matching virtual assets from a content library or virtual asset data store that match within a predetermined threshold the volumetric model created by the depth cameras 104). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes to change expression based on voice as taught by Yoon, to provide the user with reliable communication when errors occur.

Claim 1 recites the functions of the apparatus recited in claim 16 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 16 applies to the method steps of claim 1. Claim 2 recites the functions of the apparatus recited in claim 17 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 17 applies to the method steps of claim 2. Claim 3 recites the functions of the apparatus recited in claim 18 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 18 applies to the method steps of claim 3. Claim 4 recites the functions of the apparatus recited in claim 19 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 19 applies to the method steps of claim 4.

Regarding claim 5, Cordes as modified by Yoon discloses the method according to claim 1, before the adding the virtual effect model on the first object, further comprising: acquiring a to-be-processed image which is pre-uploaded, and determining the virtual effect model based on at least one display object in the to-be-processed image (Cordes' col. 33 line 55-63: causing a virtual object in the 3-D virtual environment to be in a first animation state in animation graph (1802)... the first animation state may include a state wherein the object is not yet active or visible in the 3-D virtual scene; col. 10 line 48-58: if a new human character enters the first performance area 102, the volume of the new human character as approximated by the depth cameras 104 can be used to select a digital model of a human character from a content library that is the same or similar to the new human character in the first performance area. As the new human character moves throughout the first performance area 102, the model of the new human character selected from the content library can be moved throughout the 3-D virtual scene 202 based on the motion of the volume detected by the depth cameras 104). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes to change expression based on voice as taught by Yoon, to provide the user with reliable communication when errors occur.

Regarding claim 6, Cordes as modified by Yoon discloses the method according to claim 1, wherein a number of the first objects comprises at least one, and the adding the virtual effect model on the first object comprises: respectively adding a corresponding virtual effect model on each of the at least one first object; or adding a virtual effect model on a first object meeting a preset condition among the at least one first object; and determining a first object corresponding to the audio information, adding the virtual effect model on the first object, and transparently displaying virtual effect models corresponding to other first objects among the at least one first object (Cordes' col. 34 line 60-61: the transition may be triggered by identifying the motion or position as the predefined motion or position; col. 10 line 48-58: if a new human character enters the first performance area 102, the volume of the new human character as approximated by the depth cameras 104 can be used to select a digital model of a human character from a content library that is the same or similar to the new human character in the first performance area. As the new human character moves throughout the first performance area 102, the model of the new human character selected from the content library can be moved throughout the 3-D virtual scene 202 based on the motion of the volume detected by the depth cameras 104; col. 13 line 21-37: recognizes predefined movements of the frame 404 and triggers the insertion and/or motion of predefined virtual assets in a 3-D environment accordingly. FIG. 5 illustrates a view of a 3-D environment 502 that includes virtual assets that are generated and/or controlled by movements of the motion-capture frame... in FIG. 5, the frame 404 has been replaced with a 3-D model of a clown 506. In some embodiments, the 3-D model of the clown 506 may include a full 3-D character model of a clown character with control points that are linked to the vertices on the frame 404). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes to change expression based on voice as taught by Yoon, to provide the user with reliable communication when errors occur.

Regarding claim 7, Cordes as modified by Yoon discloses the method according to claim 1, wherein the adding the virtual effect model on the target object comprises: determining an effect adding position corresponding to the target object and adding the virtual effect model to the effect adding position, wherein the effect adding position comprises any position on a limb and a trunk of the target object or any position in a preset neighborhood centered on the target object (Cordes' col. 31 line 12-15: FIG. 15C illustrates an example of virtual effects that can be triggered based on identifying predefined motions and/or positions; col. 34 line 60-61: the transition may be triggered by identifying the motion or position as the predefined motion or position; see FIG. 15C: the effect adding position is on the arms, legs, and trunk of the virtual character). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes to change expression based on voice as taught by Yoon, to provide the user with reliable communication when errors occur.

Regarding claim 8, Cordes as modified by Yoon discloses the method according to claim 7, wherein the determining an effect adding position corresponding to the first object and adding the virtual effect model to the effect adding position comprises: determining hand information of the first object, and determining that the effect adding position is a hand part in response to the hand information being consistent with preset hand posture information; and adding the virtual effect model at the hand part (Cordes' col. 34 line 60-61: the transition may be triggered by identifying the motion or position as the predefined motion or position; col. 12 line 62-65: when the motion-capture performer 308 begins making a juggling motion with his hands and arms, the frame 404 may begin moving its “hands and arms” in a corresponding fashion). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes to change expression based on voice as taught by Yoon, to provide the user with reliable communication when errors occur.

Regarding claim 9, Cordes as modified by Yoon discloses the method according to claim 1, wherein the controlling, based on audio information collected, a first virtual part of a virtual effect model corresponding to the first object to move comprises: in response to the audio information being collected, controlling the first virtual part of the virtual effect model corresponding to the first object to move; or in response to the audio information being detected to comprise triggering motion content, controlling the first virtual part of the virtual effect model corresponding to the first object to move (Yoon's paragraph [0072]: The image processing module 260 may process the images of the counterpart user to express the analyzed counterpart user's emotion. For example, the image processing module 260 analyzes the voice and expression data and, when it is determined that the counterpart user is happy, change the shapes of the eyes and sides of the mouth of the counterpart user's face to emulate smiling. If it is determined that the counterpart user is speaking based on the voice and expression data, the image processing module 260 animates the mouth region to emulate talking on the counterpart user's image; Cordes' col. 46 line 60-67: User interface input devices may include a keyboard, pointing devices such as a mouse or trackball… audio input devices with voice command recognition systems, microphones, and other types of input devices). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes to change expression based on voice as taught by Yoon, to provide the user with reliable communication when errors occur.

Regarding claim 12, Cordes as modified by Yoon discloses the method according to claim 1, wherein the virtual effect model is located at a hand part of the first object, and the controlling the first virtual part of the virtual effect model corresponding to the first object to move comprises: acquiring motion information of the hand part, and controlling the first virtual part to move based on the motion information (Cordes' col. 34 line 60-61: the transition may be triggered by identifying the motion or position as the predefined motion or position; col. 12 line 62-65: when the motion-capture performer 308 begins making a juggling motion with his hands and arms, the frame 404 may begin moving its “hands and arms” in a corresponding fashion). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes to change expression based on voice as taught by Yoon, to provide the user with reliable communication when errors occur.

Regarding claim 13, Cordes as modified by Yoon discloses the method according to claim 12, further comprising: in response to not obtaining the motion information of the hand part, controlling the first virtual part to move based on the audio information, wherein motion information of the first virtual part corresponds to mouth information of the audio information (Yoon's paragraph [0072]: The image processing module 260 may process the images of the counterpart user to express the analyzed counterpart user's emotion. For example, the image processing module 260 analyzes the voice and expression data and, when it is determined that the counterpart user is happy, change the shapes of the eyes and sides of the mouth of the counterpart user's face to emulate smiling. If it is determined that the counterpart user is speaking based on the voice and expression data, the image processing module 260 animates the mouth region to emulate talking on the counterpart user's image). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes to change expression based on voice as taught by Yoon, to provide the user with reliable communication when errors occur.

Regarding claim 14, Cordes as modified by Yoon discloses the method according to claim 1, wherein the adjusting the first entity part of the first object to the preset state comprises: based on an entity part adjustment model corresponding to the preset state, adjusting the first entity part from a first state to the preset state (Cordes' col. 34 line 57-63: causing the virtual object in the 3-D virtual environment to transition to a second animation state in the animation graph (1810). In some embodiments, the transition may be triggered by identifying the motion or position as the predefined motion or position. The second animation state may include causing the virtual object to appear or a virtual effect to execute). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes to change expression based on voice as taught by Yoon, to provide the user with reliable communication when errors occur.

Claim 20 recites the functions of the apparatus recited in claim 16 as medium steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 16 applies to the medium steps of claim 20.

Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Cordes (U.S. Patent 10796489) in view of Yoon (U.S. Patent Application 20150304603), and further in view of Hussen (U.S. Patent Application 20210090314).

Regarding claim 11, Cordes as modified by Yoon discloses controlling the first virtual part of the virtual effect model corresponding to the first object to move (Cordes' col. 10 line 48-58: if a new human character enters the first performance area 102, the volume of the new human character as approximated by the depth cameras 104 can be used to select a digital model of a human character from a content library that is the same or similar to the new human character in the first performance area. As the new human character moves throughout the first performance area 102, the model of the new human character selected from the content library can be moved throughout the 3-D virtual scene 202 based on the motion of the volume detected by the depth cameras 104). However, Cordes as modified by Yoon fails to disclose acquiring mouth shape information and controlling the first virtual part to move based on the mouth shape information, wherein motion information of the first virtual part is consistent with the mouth shape information. Hussen discloses acquiring mouth shape information and controlling the first virtual part to move based on the mouth shape information, wherein motion information of the first virtual part is consistent with the mouth shape information (paragraph [0299]: characteristic generator 840 may generate a first portion of the set of characteristics 842 for animating the lips of avatar 916 based on data set 822 and data set 832 because audio input 802 and video input 804 provide details of how user 910's mouth moves). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes and Yoon to control mouth shape as taught by Hussen, to generate an avatar that accurately represents the user.

Regarding claim 10, Cordes as modified by Yoon and Hussen discloses the method according to claim 1, wherein the controlling the first virtual part of the virtual effect model corresponding to the first object to move comprises: hiding the first virtual part from display, and migrating the first entity part to the first virtual part based on a pre-trained entity part migration model (Hussen's paragraph [0268]: in FIG. 9B, a portion 916 of user 910's face in video input 804 may be at least partially occluded by, for example, the user's hand, another user's hand, a shadow, etc.; Cordes' col. 30 line 65-67: use the movements of the motion-capture performer 308 to train the game engine to recognize new predefined motions and/or positions; col. 2 line 44-47: the depth cameras 104 can create a real-time model of moving and stationary objects in the performance area that can be used in a 3-D virtual scene before adding additional virtual assets), wherein motion information of the first virtual part corresponds to mouth shape information of the first entity part (Hussen's paragraph [0299]: characteristic generator 840 may generate a first portion of the set of characteristics 842 for animating the lips of avatar 916 based on data set 822 and data set 832 because audio input 802 and video input 804 provide details of how user 910's mouth moves). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes and Yoon to control mouth shape as taught by Hussen, to generate an avatar that accurately represents the user.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Cordes (U.S. Patent 10796489) in view of Yoon (U.S. Patent Application 20150304603), in view of Hussen (U.S. Patent Application 20210090314), and further in view of Cheung (U.S. Patent Application 20130226587).
Regarding claim 15, Cordes as modified by Yoon and Hussen discloses that the first entity part is a mouth part of the first object and the first virtual part is a mouth part of the virtual effect (Hussen's paragraph [0299]: characteristic generator 840 may generate a first portion of the set of characteristics 842 for animating the lips of avatar 916 based on data set 822 and data set 832 because audio input 802 and video input 804 provide details of how user 910's mouth moves). However, Cordes as modified by Yoon and Hussen fails to disclose that the preset state is a closed state or a smiling state of the mouth part. Cheung discloses the preset state is a closed state or a smiling state of the mouth part (paragraph [0058]: The position with minimum mouth point always represents the status of mouth closing or intersection point between subunit utterances). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Cordes, Yoon, and Hussen to use a mouth closed state as taught by Cheung, to segment lip motion effectively.

Response to Arguments

Applicant's arguments filed 12/30/2025, pages 8-10, with respect to the rejection of claim 1 under § 103 have been fully considered but are moot in view of the new ground(s) of rejection made under 35 U.S.C. 103 as being unpatentable over Cordes (U.S. Patent 10796489) in view of Yoon (U.S. Patent Application 20150304603), as outlined above.

Applicant argues on pages 8-9 that the wireframe representation referenced in Cordes, such as in FIG. 4, functions as an intermediate data structure (a skeletal structure) used for constructing a virtual avatar; that it is not a visual effect model added onto a real-time camera feed or capture screen that contains imagery of an actual physical object; and that the three-dimensional clown model 506 in Cordes is a virtual representation that replaces the motion-capture performer 308 in the 3-D virtual environment 502, where this replacement occurs in a virtual environment and does not involve superimposing the clown model onto the original camera-captured image of the real performer.

In reply: Cordes col. 12 line 45-54: FIG. 4 illustrates a resulting motion-capture output derived from the images captured by the motion-capture cameras 304, according to some embodiments. The motion-capture output can be used to generate a representation of the motion-capture performer 308 in the second performance area 302. In this example, each of the visual fiducials used to track the motion of the motion-capture performer 308 can be represented by a vertex, and each of the vertices can be connected to represent a skeleton or wireframe of the motion-capture performer 308; col. 13 line 21-37: recognizes predefined movements of the frame 404 and triggers the insertion and/or motion of predefined virtual assets in a 3-D environment accordingly. FIG. 5 illustrates a view of a 3-D environment 502 that includes virtual assets that are generated and/or controlled by movements of the motion-capture frame... in FIG. 5, the frame 404 has been replaced with a 3-D model of a clown 506. In some embodiments, the 3-D model of the clown 506 may include a full 3-D character model of a clown character with control points that are linked to the vertices on the frame 404. FIG. 4 is an image captured by the motion-capture cameras 304, and the three-dimensional clown model 506 is the virtual effect model that replaces the wireframe image. The applicant can further specify the independent claims to differentiate from the cited prior art.
Applicant argues on pages 9-10 that Cordes operates exclusively within a synthetic virtual space and does not involve modifying or adjusting any real-world entity parts, such as facial features including eyes, ears, eyebrows, nose, or mouth, of an actual object present in a camera-captured image or video stream.

In reply: the rejection is based on Cordes and Yoon combined. Cordes discloses capturing a real-world image. Yoon discloses, based on audio information collected, controlling a first virtual part to move, wherein the first entity part comprises at least one selected from a group consisting of an eye part, an ear part, an eyebrow part, a nose part and a mouth part of the first object (paragraph [0072]: The image processing module 260 may process the images of the counterpart user to express the analyzed counterpart user's emotion. For example, the image processing module 260 analyzes the voice and expression data and, when it is determined that the counterpart user is happy, change the shapes of the eyes and sides of the mouth of the counterpart user's face to emulate smiling. If it is determined that the counterpart user is speaking based on the voice and expression data, the image processing module 260 animates the mouth region to emulate talking on the counterpart user's image; Yoon's teaching of changing expression based on voice can be combined with Cordes' device so as to control the virtual effect model's expression based on voice).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yi Yang, whose telephone number is (571) 272-9589. The examiner can normally be reached Monday-Friday, 9:00 AM-6:00 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/YI YANG/
Primary Examiner, Art Unit 2616
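The reply-period rules in the Conclusion reduce to simple date arithmetic. Below is a minimal sketch, assuming the Jan 30, 2026 date from the timeline below is the mailing date and using the third-party dateutil package for calendar-month offsets; extension-fee mechanics under 37 CFR 1.136(a) are not modeled.

```python
# Reply windows for a final action, per the Conclusion above.
# Assumption: mailing date = Jan 30, 2026 (taken from the timeline).

from datetime import date
from dateutil.relativedelta import relativedelta

mailed = date(2026, 1, 30)

two_month_window = mailed + relativedelta(months=2)  # file by here to keep the advisory-action rule
ssp_deadline = mailed + relativedelta(months=3)      # shortened statutory period
statutory_max = mailed + relativedelta(months=6)     # absolute limit, no extension beyond

print(f"Reply within two months by:      {two_month_window}")  # 2026-03-30
print(f"Shortened statutory period ends: {ssp_deadline}")      # 2026-04-30
print(f"Statutory deadline (6 months):   {statutory_max}")     # 2026-07-30
```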

Prosecution Timeline

Sep 27, 2024: Application Filed
Nov 25, 2024: Non-Final Rejection — §103
Feb 28, 2025: Response Filed
Mar 11, 2025: Final Rejection — §103
May 19, 2025: Response after Non-Final Action
Jul 28, 2025: Request for Continued Examination
Jul 30, 2025: Response after Non-Final Action
Sep 25, 2025: Non-Final Rejection — §103
Dec 30, 2025: Response Filed
Jan 30, 2026: Final Rejection — §103 (current)
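To get a feel for the cadence of this prosecution, the listed dates can be diffed directly. A small sketch using only the major milestones above, with dates exactly as shown on this page:

```python
# Intervals between the major prosecution events listed above.
from datetime import date

events = [
    (date(2024, 9, 27), "Application Filed"),
    (date(2024, 11, 25), "Non-Final Rejection"),
    (date(2025, 3, 11), "Final Rejection"),
    (date(2025, 7, 28), "RCE"),
    (date(2025, 9, 25), "Non-Final Rejection"),
    (date(2026, 1, 30), "Final Rejection (current)"),
]

for (prev, _), (cur, label) in zip(events, events[1:]):
    print(f"{label:28s} +{(cur - prev).days} days")

total = (events[-1][0] - events[0][0]).days
print(f"Elapsed so far: {total} days (~{total / 365.25:.1f} years)")
```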

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586304: PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12567129: Image Processing Method and Electronic Device
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561276: SYSTEMS AND METHODS FOR UPDATING MEMORY SIDE CACHES IN A MULTI-GPU CONFIGURATION
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12541902: SIGN LANGUAGE GENERATION AND DISPLAY
Granted Feb 03, 2026 (2y 5m to grant)

Patent 12541896: COMPUTER-BASED CONTENT PERSONALIZATION OF A VISUAL DISPLAY
Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 71%
Grant Probability with Interview: 88% (+17.2%)
Median Time to Grant: 2y 9m
PTA Risk: High

Based on 415 resolved cases by this examiner. Grant probability is derived from the career allow rate.
