Prosecution Insights
Last updated: April 19, 2026
Application No. 18/610,787

SYSTEM AND METHOD FOR GENERATING AND INTERACTING WITH CONVERSATIONAL THREE-DIMENSIONAL SUBJECTS

Status: Non-Final OA (§103)
Filed: Mar 20, 2024
Examiner: TRUONG, KARL DUC
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Looking Glass Factory Inc.
OA Round: 1 (Non-Final)
Grant Probability: 52% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 52% (15 granted / 29 resolved; -10.3% vs TC avg)
Interview Lift: +31.0% (strong lift, measured across resolved cases with interview)
Avg Prosecution: 2y 7m (typical timeline)
Career History: 74 total applications across all art units; 45 currently pending

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§103: 85.3% (+45.3% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 29 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 5 and 16 are objected to because of the following informalities: Claim 5 recites the limitation "provide the test to a second LLM…" on page 2, line 5; the examiner suggests amending this to "provide a test to a second LLM…". Claim 16 recites the limitation "after the response is generated the input audio is removed…" on pages 4-5, lines 17 and 1 respectively; the examiner suggests amending this to "after the response is generated, the input audio is removed…". Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7, 10-12, 14-15, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Breazeal et al. (US 20180133900 A1), hereinafter referenced as Breazeal, in view of Doyen et al. (US 20180124373 A1), hereinafter referenced as Doyen.

Regarding Claim 1, Breazeal discloses a system (Breazeal, FIG. 1A teaches a social robot <read on system>) comprising: [Image: media_image1.png] a [[three-dimensional]] display configured to project light that is perceivable as a three-dimensional image (Breazeal, FIG. 1A teaches the social robot including a display/face, which is connected <read on project light> to imagery ES132 of output ES130; [0180]: teaches a simulated, animated 3D sphere <read on 3D image> representing the "eye" of the social robot being displayed; [0178]: teaches display screen animations including eye animations); a microphone configured to receive an input audio signal (Breazeal, FIG. 1A teaches the social robot including a microphone to receive input audio); a speaker configured to output an output audio signal (Breazeal, FIG. 1A teaches the social robot including a speaker connected to output ES130 for outputting audio sounds ES138 <read on output audio signal>); and a processor configured to: convert the input audio signal into a text (Breazeal, [0063]: teaches a perception subsystem ES102 including an automated speech recognition facility (ASR) ES118 that processes detected speech into structured data that represents words); determine a response to the text using a large language model (LLM) (Breazeal, [0066]: teaches the processed text being sent to a Macro-Level Behavior (MLB) module ES120, where it uses a natural language understanding (NLU) facility ES122 <read on LLM>; [0073]: teaches using the embodied speech facility ES128 to generate a structured response based on processed text from ASR ES118; Note: it should be noted that an LLM is a subset of Natural Language Processing (NLP) in the art), wherein the output audio signal comprises the response (Breazeal, [0078]: teaches outputting embodied speech <read on output audio signal> based on a Multi-Interaction Module (MIM) data structure which uses the structured response from ASR ES118); and modify the three-dimensional image based on at least one of the input audio signal and the response (Breazeal, [0082]: teaches the social robot outputting an embodied speech response "Happy to see you, NAME", where a happy animation is triggered <read on modify 3D image> after user interaction <read on input audio signal>).

However, Breazeal does not expressly disclose a three-dimensional display configured to project light that is perceivable as a three-dimensional image. Doyen discloses a three-dimensional display configured to project light that is perceivable as a three-dimensional image (Doyen, [0026]: teaches an auto-stereoscopic display <read on 3D display> displaying multi-view images <read on 3D image>, where each multi-view image comprises a pair of views associated with a viewing location in front of the auto-stereoscopic display). Doyen is analogous art with respect to Breazeal because they are from the same field of endeavor, namely displaying content on viewable screens to users. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement an auto-stereoscopic display that displays multi-view images of a virtual target as taught by Doyen into the teaching of Breazeal. Doing so would allow the display on the social robot to express more immersive emotions, thereby improving the overall user experience. Therefore, it would have been obvious to combine Doyen with Breazeal.

Regarding Claim 12, it recites limitations that are similar in scope to those of Claim 1, but in a method. As shown in that rejection, the combination of Breazeal and Doyen discloses the limitations of Claim 1.
Additionally, Breazeal discloses a method (Breazeal, [0215]: teaches a method for a social robot using paralinguistic non-speech and/or spoken language communication in a variety of contexts and combinations) comprising:… concurrently with outputting the response, changing the displayed three-dimensional subject to the modified appearance of the three-dimensional subject (Breazeal, [0100]: teaches the social robot engaging in multi-modal expression when producing speech audio <read on concurrently outputting response>, where the social robot conveys "emotion, character traits, intention, semantic meaning, and a wide range of multi-modal expressions"; [0108]: teaches on-screen graphics/animations corresponding to categories/types of emotional expressions). Thus, Claim 12 is met by Breazeal according to the mapping presented in the rejection of Claim 1, given that the system corresponds to a method.

Regarding Claim 2, the combination of Breazeal and Doyen discloses the system of Claim 1. Additionally, Breazeal further discloses wherein the three-dimensional image comprises a character (Breazeal, [0180]: teaches a 3D sphere <read on character> being displayed on screen, which represents the "eye" of the social robot), wherein the character is associated with a personality (Breazeal, [0100]: teaches the social robot conveying emotion, character traits <read on personality>, intention, semantic meaning, and a wide range of multi-modal expressions).

Regarding Claim 3, the combination of Breazeal and Doyen discloses the system of Claim 2. Additionally, Breazeal further discloses wherein the processor is further configured to generate the response based on the personality of the character (Breazeal, [0100]: teaches the social robot engaging in multi-modal expression when producing speech audio <read on generate response>, where the social robot conveys "emotion, character traits <read on personality>, intention, semantic meaning, and a wide range of multi-modal expressions").

Regarding Claim 4, the combination of Breazeal and Doyen discloses the system of Claim 2. Additionally, Breazeal further discloses wherein modifying the three-dimensional image comprises modifying the character to an appearance associated with a delay when determining a response requires greater than a threshold time (Breazeal, [0179]: teaches triggering a graphical long blink eye animation <read on appearance associated with delay> of a 3D sphere <read on character> to indicate confusion or high cognitive load being processed <read on determining response exceeds threshold time> by a social robot; Note: it should be noted that Paragraph [0031] of the current application states that the system can be integrated into an external system, such as a system containing a display being mounted to a face or head region of a robot, allowing a robot to act as a chatbot (and/or be converted to enabling interfacing with a human depending on an operation mode)).

Regarding Claim 7, the combination of Breazeal and Doyen discloses the system of Claim 1. Additionally, Breazeal further discloses wherein the processor is further configured to generate a second output audio signal without receiving an input audio signal (Breazeal, [0225]: teaches the social robot determining that a period of time has passed within which it is expected that the user would respond to a text-to-speech audio request, where the social robot may inquire again using a paralinguistic audio prompt <read on generating second output audio signal without receiving input audio signal>).

Regarding Claim 10, the combination of Breazeal and Doyen discloses the system of Claim 1.
Breazeal does not expressly disclose the limitations of Claim 10; however, Doyen discloses wherein the display comprises an autostereoscopic display configured to output a plurality of images comprising different perspectives of a common subject in the three-dimensional image (Doyen, [0052]: teaches an "auto-stereoscopic display is configured to display multi-view images <read on images of common subject> comprising n views <read on different perspectives of common subject> forming n - 1 successive stereoscopic pairs of views (n being an integer strictly greater than two)" as shown in FIG. 2), wherein [Image: media_image2.png] each image of the plurality of images is output in a different viewing direction (Doyen, [0052]: teaches an "auto-stereoscopic display is configured to display multi-view images comprising n views <read on different viewing directions> forming n−1 successive stereoscopic pairs of views (n being an integer strictly greater than two)" as shown in FIG. 2). Doyen is analogous art with respect to Breazeal because they are from the same field of endeavor, namely displaying content on viewable screens to users. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement an auto-stereoscopic display that displays multi-view images of a virtual target as taught by Doyen into the teaching of Breazeal. Doing so would allow the display on the social robot to express more immersive emotions, thereby improving the overall user experience. Therefore, it would have been obvious to combine Doyen with Breazeal.

Regarding Claim 11, the combination of Breazeal and Doyen discloses the system of Claim 10. Breazeal does not expressly disclose the limitations of Claim 11; however, Doyen discloses wherein the autostereoscopic display comprises: a screen configured to output the light (Doyen, [0058]: teaches an auto-stereoscopic display <read on screen> to display multi-view images <read on output light>); a lenticular array overlaid on the screen (Doyen, [0058]: teaches the auto-stereoscopic display consisting of a lenticular array), wherein the lenticular array is oriented at an angle relative to pixels of the screen (Doyen, [0058]: teaches the same views are repeated at predetermined intervals <read on lenticular array being oriented relative to pixels of screen> due to optical properties of the lenticular array of the auto-stereoscopic display); and wherein the processor is further configured to generate a lightfield image comprising the common subject by assigning pixels of the screen to pixels of the image based on the different viewing direction and the angle (Doyen, [0058]: teaches generating a multi-view image <read on lightfield image> to be displayed based on determining the current viewpoint associated with the position of the user, where the multi-view image consists of pixels of the current viewpoint). Doyen is analogous art with respect to Breazeal because they are from the same field of endeavor, namely displaying content on viewable screens to users. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement an auto-stereoscopic display that displays multi-view images of a virtual target as taught by Doyen into the teaching of Breazeal. Doing so would allow the display on the social robot to express more immersive emotions, thereby improving the overall user experience. Therefore, it would have been obvious to combine Doyen with Breazeal.

Regarding Claim 14, the combination of Breazeal and Doyen discloses the method of Claim 12.
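The pixel assignment recited in Claim 11 above (mapping each screen pixel to one of the interleaved views based on viewing direction and the lenticular slant angle) follows a standard slanted-lenticular interleaving scheme. A minimal sketch, with illustrative parameter names that are not drawn from Breazeal or Doyen:

```python
import math

def view_index(x, y, n_views, lens_pitch_px, slant_rad):
    """Assign screen pixel (x, y) to one of n_views interleaved views.

    The lenticular sheet is slanted relative to the pixel grid, so the
    horizontal phase under a lenslet shifts slightly with every row;
    the phase within one lens pitch selects the view.
    """
    phase = (x - y * math.tan(slant_rad)) % lens_pitch_px
    return int(phase * n_views / lens_pitch_px) % n_views

def interleave(views, lens_pitch_px, slant_rad):
    """Build one lightfield frame by picking, per pixel, the matching view.

    views[v][y][x] holds the pixel of view v; all views share one size.
    """
    n_views, height, width = len(views), len(views[0]), len(views[0][0])
    return [
        [views[view_index(x, y, n_views, lens_pitch_px, slant_rad)][y][x]
         for x in range(width)]
        for y in range(height)
    ]
```

With a slant of zero and a lens pitch of n_views pixels this degenerates to plain column-by-column interleaving; a real slanted-lenticular display additionally works per subpixel, treating R, G, and B as separate x positions.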
Additionally, Breazeal further discloses wherein modifying the appearance comprises: detecting a trigger word or phrase in the input audio (Breazeal, [0066]: teaches the embodied listen facility ES126 interacting with ASR ES118 facility of perception subsystem ES102 "to capture responses <read on input audio> to the social robot's questions to a human and words spoken by the human that relates to a keyword <read on trigger word> or phrase <read on trigger phrase>, such as 'Hey Jibo'"); and adding a secret based on the trigger word or phrase (Breazeal, [Pg. 32, Table 1]: teaches a list of paralinguistic emotive states, where based on the emotive state, situational context (i.e., a user request <read on trigger word/phrase>), and the personality of the social robot, the output audio will contain inserted emotions <read on secret>, such as giggling when telling a joke or saying "uh-oh" after making a mistake; Note: it should be noted that "secret" is mentioned a single time without elaborating what it is and what it does; therefore, BRI is used).

[Image: media_image3.png]

Regarding Claim 15, the combination of Breazeal and Doyen discloses the method of Claim 12. Additionally, Breazeal further discloses wherein the response is further generated based on a historical record of prior input audio and prior responses (Breazeal, [0228]: teaches the social robot utilizing a history of past utterances <read on historical record of prior input audio and prior responses> of the user to engage in back and forth communication).

Regarding Claim 18, the combination of Breazeal and Doyen discloses the method of Claim 12. Additionally, Breazeal further discloses wherein the three-dimensional subject is associated with a personality (Breazeal, [0100]: teaches the social robot conveying emotion, character traits <read on personality>, intention, semantic meaning, and a wide range of multi-modal expressions), wherein the response is further generated based on the personality (Breazeal, [0100]: teaches the social robot engaging in multi-modal expression when producing speech audio <read on generate response>, where the social robot conveys "emotion, character traits <read on personality>, intention, semantic meaning, and a wide range of multi-modal expressions").

Regarding Claim 19, the combination of Breazeal and Doyen discloses the method of Claim 18. Additionally, Breazeal further discloses wherein audio characteristics of the output audio depend on the personality (Breazeal, [0100]: teaches the social robot producing speech audio <read on output audio> with expressive attributes <read on audio characteristics>, where the audio includes "emotion, character traits <read on personality>, intention, semantic meaning, and a wide range of multi-modal expressions").

Claims 5, 8-9, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Breazeal et al. (US 20180133900 A1), hereinafter referenced as Breazeal, in view of Doyen et al. (US 20180124373 A1), hereinafter referenced as Doyen, as applied to Claims 4, 1, and 15 above, respectively, and further in view of Creswell et al. ("Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning"), hereinafter referenced as Creswell.

Regarding Claims 5 and 17, the combination of Breazeal and Doyen discloses the system and the method of Claims 4 and 15, respectively.
The combination of Breazeal and Doyen does not expressly disclose the limitations of Claims 5 and 17; however, Creswell discloses wherein, when the delay exceeds the threshold time, the processor is further configured to provide the test to a second LLM to determine a second response (Creswell, FIG. 1 teaches a Selection-Inference (SI) framework <read on second LLM>, which uses a vanilla LLM with a Chain-of-Thought (COT) approach, to output a correct selection prompt <read on second response> based on provided context information <read on test>; Note: it should be noted that "test" is being interpreted as "historical record"), wherein [Image: media_image4.png] the response comprises the first completed of the response determined by the LLM and the second response (Creswell, FIG. 2 teaches examples of correct selection prompts <read on second response> from the SI model <read on LLM>, where the SI model recovers from errors to justify ambiguous answers with a reasoning trace, where the reasoning trace is based on the context and the question <read on first completed response>). [Image: media_image5.png] Creswell is analogous art with respect to Breazeal, in view of Doyen, because they are from the same field of endeavor, namely utilizing machine learning to process audio input data. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a large language model (LLM) that incorporates a Chain-of-Thought (COT) framework as taught by Creswell into the teaching of Breazeal, in view of Doyen. Doing so would allow a system such as the social robot to use deep reasoning when answering user queries, thereby improving both the overall user experience and the overall functional capabilities of the assistant. Therefore, it would have been obvious to combine Creswell with Breazeal, in view of Doyen.
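The arbitration recited in Claim 5, where the response comprises whichever of the two LLM outputs completes first, amounts to racing two concurrent calls. A minimal sketch; the model callables are hypothetical stand-ins, not a real LLM client API:

```python
import time
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def first_completed_response(model_a, model_b, prompt):
    """Query two models concurrently; return whichever answers first.

    model_a and model_b are any callables from prompt text to response
    text (placeholders for real LLM clients).
    """
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(model_a, prompt), pool.submit(model_b, prompt)]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return next(iter(done)).result()

# Simulated models: here the second "LLM" happens to answer faster.
def slow_model(prompt):
    time.sleep(0.2)
    return "slow: " + prompt

def fast_model(prompt):
    return "fast: " + prompt
```

Note that the with-block still waits for the slower call to finish before returning; a production version would cancel or abandon the losing request.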
Regarding Claim 8, the combination of Breazeal and Doyen discloses the system of Claim 1. Additionally, Breazeal further discloses wherein the processor is further configured to store a historical record of prior input audio signals and prior responses (Breazeal, [0228]: teaches the social robot utilizing a history of past utterances <read on storing historical record of prior input audio and prior responses> of the user to engage in back and forth communication), wherein the LLM determines the response based on the historical record and the text (Breazeal, [0228]: teaches the social robot utilizing the NLU 122 to generate a response based on a history of past utterances <read on historical record and text> of the user to engage in back and forth communication), wherein [[the text is added to the historical record before an earliest record in the historical record.]] However, the combination of Breazeal and Doyen does not expressly disclose the text is added to the historical record before an earliest record in the historical record. Creswell discloses the text is added to the historical record before an earliest record in the historical record (Creswell, [Section 4.1 Selection Module]: teaches an overview of the n-shot prompt, where the question and context are added first <read on before earliest record> in the prompt). Creswell is analogous art with respect to Breazeal, in view of Doyen, because they are from the same field of endeavor, namely utilizing machine learning to process audio input data. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a large language model (LLM) that incorporates a Chain-of-Thought (COT) framework as taught by Creswell into the teaching of Breazeal, in view of Doyen. Doing so would allow a system such as the social robot to use deep reasoning when answering user queries, thereby improving both the overall user experience and the overall functional capabilities of the assistant. Therefore, it would have been obvious to combine Creswell with Breazeal, in view of Doyen.

Regarding Claim 9, the combination of Breazeal, Doyen, and Creswell discloses the system of Claim 8. The combination of Breazeal and Doyen does not expressly disclose the limitations of Claim 9; however, Creswell discloses wherein after the response is generated, the text added to the historical record before the earliest record in the historical record is removed (Creswell, FIG. 2 teaches an example where the SI model recovers from an error, where it generates an updated inference <read on after response is generated>, which replaces the previous inference <read on removing previously added text>). Creswell is analogous art with respect to Breazeal, in view of Doyen, because they are from the same field of endeavor, namely utilizing machine learning to process audio input data. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a large language model (LLM) that incorporates a Chain-of-Thought (COT) framework as taught by Creswell into the teaching of Breazeal, in view of Doyen. Doing so would allow a system such as the social robot to use deep reasoning when answering user queries, thereby improving both the overall user experience and the overall functional capabilities of the assistant. Therefore, it would have been obvious to combine Creswell with Breazeal, in view of Doyen.

Regarding Claim 16, the combination of Breazeal and Doyen discloses the method of Claim 15.
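The transient-context pattern of Claims 8-9 (prepend the new text ahead of the earliest record, generate a response from the full record, then remove the prepended text) can be sketched as below. The `llm` callable is a hypothetical placeholder, and how the completed exchange is subsequently logged is not recited in the claims:

```python
def respond_with_transient_context(history, text, llm):
    """Generate a response with `text` temporarily prepended to the
    historical record, then remove it again.

    `history` is a list whose first element is the earliest record;
    `llm` is any callable from a record list to a response string
    (a stand-in, not a real model API).
    """
    history.insert(0, text)       # added before the earliest record
    try:
        response = llm(history)   # response determined from record + text
    finally:
        history.pop(0)            # removed after the response is generated
    return response
```

The try/finally guarantees the historical record is restored even if response generation raises, which matters when the record persists across turns.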
Additionally, Breazeal further discloses wherein [[the input audio is added to the historical record before an earliest entry in the historical record, wherein]] the LLM receives the historical record and generates the response based on the historical record (Breazeal, [0228]: teaches the social robot utilizing the NLU 122 to generate a response based on a history of past utterances <read on historical record> of the user to engage in back and forth communication), wherein [[after the response is generated the input audio is removed from before the earliest entry in the historical record.]] However, the combination of Breazeal and Doyen does not expressly disclose the input audio is added to the historical record before an earliest entry in the historical record, wherein after the response is generated the input audio is removed from before the earliest entry in the historical record. Creswell discloses the input audio is added to the historical record before an earliest entry in the historical record (Creswell, [Section 4.1 Selection Module]: teaches an overview of the n-shot prompt, where the question and context are added first <read on before earliest record> in the prompt), wherein after the response is generated the input audio is removed from before the earliest entry in the historical record (Creswell, FIG. 2 teaches an example where the SI model recovers from an error, where it generates an updated inference <read on after response is generated>, which replaces the previous inference <read on removing previously added audio>; Note: it should be noted that although "audio" is not expressly stated, one skilled in the art would be able to perform a text-to-speech process to obtain audio; in addition, the input audio is processed via a speech-to-text conversion). Creswell is analogous art with respect to Breazeal, in view of Doyen, because they are from the same field of endeavor, namely utilizing machine learning to process audio input data. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a large language model (LLM) that incorporates a Chain-of-Thought (COT) framework as taught by Creswell into the teaching of Breazeal, in view of Doyen. Doing so would allow a system such as the social robot to use deep reasoning when answering user queries, thereby improving both the overall user experience and the overall functional capabilities of the assistant. Therefore, it would have been obvious to combine Creswell with Breazeal, in view of Doyen.

Claims 6 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Breazeal et al. (US 20180133900 A1), hereinafter referenced as Breazeal, in view of Doyen et al. (US 20180124373 A1), hereinafter referenced as Doyen, as applied to Claims 2 and 12 above respectively, and further in view of Zimmermann et al. (US 20220270316 A1), hereinafter referenced as Zimmermann.

Regarding Claims 6 and 20, the combination of Breazeal and Doyen discloses the system and the method of Claims 2 and 12, respectively.
The combination of Breazeal and Doyen does not expressly disclose the limitations of Claims 6 and 20; however, Zimmermann discloses an eye tracker configured to determine an eye pose of a viewer of the display (Zimmermann, [0104]: teaches a wearable system <read on eye tracker> determining the eye pose of a user), wherein modifying the three-dimensional image comprises changing an eye-direction of the character to match the eye pose of the viewer (Zimmermann, [0197]: teaches the wearable system mapping "the local component of an interaction 1912a to the virtual avatar 1970 using direct mapping 1962," where it maps "the eye gaze 1940a into an action of the avatar 1970 using direct mapping 1962 <read on changing eye-direction of character>," which results in the avatar's 1970 action reflecting the corresponding local component of the interaction 1912a performed by Alice (e.g., an avatar nods her head when Alice nods her head) <read on eye-direction of character matching eye pose of viewer>). Zimmermann is analogous art with respect to Breazeal, in view of Doyen, because they are from the same field of endeavor, namely synthetic avatars that respond to input audio stimuli. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a wearable system that tracks the user's eyes as taught by Zimmermann into the teaching of Breazeal, in view of Doyen. Doing so would allow systems, such as the social robot, to determine where and/or what the user is looking at, thereby allowing the system to start a conversation unprompted, resulting in a more seamless user experience. Therefore, it would have been obvious to combine Zimmermann with Breazeal, in view of Doyen.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Breazeal et al. (US 20180133900 A1), hereinafter referenced as Breazeal, in view of Doyen et al. (US 20180124373 A1), hereinafter referenced as Doyen, as applied to Claim 12 above, and further in view of Zhi et al. (US 20240242452 A1), hereinafter referenced as Zhi.

Regarding Claim 13, the combination of Breazeal and Doyen discloses the method of Claim 12. The combination of Breazeal and Doyen does not expressly disclose the limitations of Claim 13; however, Zhi discloses wherein receiving the three-dimensional subject comprises generating the three-dimensional subject using a text-to-image artificial intelligence model based on a description for a three-dimensional subject (Zhi, [0025]: teaches a schematic overview 100 of an implementation of text-to-3D avatars <read on text-to-image AI model>, where a text prompt 105 description and image dataset 110 are used as input for stable diffusion 115 to generate a 3D avatar <read on 3D subject>). Zhi is analogous art with respect to Breazeal, in view of Doyen, because they are from the same field of endeavor, namely synthetic avatar assistants that utilize neural networks. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a stable diffusion model as taught by Zhi into the teaching of Breazeal, in view of Doyen. Doing so would allow the system to generate images based on the request of the user, thereby yielding predictable results. Therefore, it would have been obvious to combine Zhi with Breazeal, in view of Doyen.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Spohrer (US 20200137230 A1) discloses a plurality of virtual assistants that are configured with different characteristics;
Davis (US 20130257877 A1) discloses generating an avatar that is configured to represent traits of a human subject;
Howard (US 20210232632 A1) discloses a user experience device that generates contextual content based on given context;
McIntyre-Kirwin (US 20240303891 A1) discloses controlling a virtual character/avatar using a multi-modal model;
Lebaredian et al. (US 20210358188 A1) discloses a virtually animated and interactive agent being rendered for visual and audible communication with one or more users within an application;
Kurien et al. (US 20200349224 A1) discloses using two language models to recognize correct and incorrect word usage based on context;
Wu et al. (US 20240304177 A1) discloses generating 3D content with fine-grained emotions and character traits using language models; and
Blattner (US 20110148916 A1) discloses audio and animation triggers, derived from analyzed text, for a virtual avatar to react to.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARL TRUONG, whose telephone number is (703) 756-5915. The examiner can normally be reached 7:30 AM - 5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.D.T./
Examiner, Art Unit 2614

/KENT W CHANG/
Supervisory Patent Examiner, Art Unit 2614

Prosecution Timeline

Mar 20, 2024
Application Filed
Oct 23, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573149: DATA PROCESSING METHOD AND APPARATUS, DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT (2y 5m to grant; granted Mar 10, 2026)
Patent 12561875: ANIMATION FRAME DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM (2y 5m to grant; granted Feb 24, 2026)
Patent 12494013: AUTODECODING LATENT 3D DIFFUSION MODELS (2y 5m to grant; granted Dec 09, 2025)
Patent 12456258: SYSTEMS AND METHODS FOR GENERATING A SHADOW MESH (2y 5m to grant; granted Oct 28, 2025)
Patent 12444020: FLEXIBLE IMAGE ASPECT RATIO USING MACHINE LEARNING (2y 5m to grant; granted Oct 14, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 52%
With Interview: 83% (+31.0%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 29 resolved cases by this examiner. Grant probability derived from career allow rate.
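Assuming the tool simply equates grant probability with the career allow rate and adds the interview lift in percentage points (an assumption; the exact methodology is not stated), the headline projections reproduce from the examiner's career data:

```python
# Examiner's career record, as shown in the Examiner Intelligence panel.
granted, resolved = 15, 29

allow_rate = granted / resolved                      # career allow rate
grant_probability = round(allow_rate * 100)          # shown as 52%
interview_lift = 31.0                                # percentage points
with_interview = grant_probability + interview_lift  # shown as 83%
```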
