Prosecution Insights
Last updated: April 19, 2026
Application No. 18/593,263

ARTIFICIAL INTELLIGENCE AUTOMATED APPLICATION GENERATION

Non-Final OA — §101, §102, §103

Filed: Mar 01, 2024
Examiner: LUU, CUONG V
Art Unit: 2192
Tech Center: 2100 — Computer Architecture & Software
Assignee: Toyota Motor North America, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (692 granted / 963 resolved; +16.9% vs TC avg) — above average
Interview Lift: +36.7% in resolved cases with an interview — strong
Typical Timeline: 3y 6m average prosecution; 36 applications currently pending
Career History: 999 total applications across all art units

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 963 resolved cases.
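The headline figures above are internally consistent and can be reproduced from the raw counts. A minimal sketch (illustrative only; the variable names are my own, and the rates are taken directly from the cards above):

```python
# Career allow rate from the raw counts on the examiner card.
granted, resolved = 692, 963
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # -> 71.9%, displayed as 72%

# Each statute card pairs a rate with its delta vs the Tech Center average
# estimate, so the TC baseline can be recovered from either number:
statute_cards = {
    "101": (18.0, -22.0),
    "103": (48.6, +8.6),
    "102": (17.8, -22.2),
    "112": (11.0, -29.0),
}
for statute, (rate, delta) in statute_cards.items():
    tc_avg = round(rate - delta, 1)
    print(f"§{statute}: TC avg estimate = {tc_avg}%")  # every card recovers 40.0%
```

Note that all four statute cards imply the same 40.0% Tech Center baseline, which is consistent with the single "average estimate" line the chart originally plotted.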

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 5 is objected to because of the following informalities: Claim 5, Line 1: insert --the-- before “obtaining”. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 – 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Claim 1

Step 1: The claim is statutory because it is directed to a method.

Step 2A, prong 1: The claim recites the limitations “obtaining a description of a software application; providing … first input data comprising an indication of the description; obtaining … first output data comprising one or more first executable packages of the software application”. These steps are directed to concepts of a developer receiving software descriptions and providing input data and output data based on the software description. These steps can be done by a human with the aid of paper and pen to write a program. In other words, these steps are directed to a mental process.

Step 2A, prong 2: The claim further recites the additional step “storing the one or more first executable packages …” and the additional elements “AI model, memory, and one or more processors”. The additional step merely stores the executable packages; thus, it is directed to insignificant extra-solution activity. The additional elements are recited at a high level of generality and used as a tool to perform the limitations.
Thus, the additional elements are not indicative of an integration into a practical application.

Step 2B: The claim as a whole does not amount to significantly more than the judicial exception. Claim 1 is directed to an abstract idea. Therefore, claim 1 is not patent eligible.

The analysis of claims 2 – 10 is as follows:

Claim 2

The claim recites the limitations “the description comprises a text description of the software application, an audio description of the software application, a video description of the software application, a visual illustration of the software application, or any combination thereof; the one or more first executable packages comprises one or more instructions that, when executed by the one or more processors, cause the one or more processors to perform one or more actions of the software application in accordance with the description; and the AI model comprises a generative AI model.” The limitations describe the description, the executable packages, and the AI model. Thus, the limitations are insignificant extra-solution activity and are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea. So, the claim does not include any additional element that is sufficient to amount to significantly more than the judicial exception.

Claim 3

The claim recites the limitations “the description comprises: an indication of one or more functionalities of the software application; an indication of an appearance of the software application; an indication of one or more operating platforms in which the software application is executed; an indication of one or more end users of the software application; or any combination thereof.” The limitations describe the description. Thus, the limitations are insignificant extra-solution activity and are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
So, the claim does not include any additional element that is sufficient to amount to significantly more than the judicial exception.

Claim 4

The claim recites the limitation “the AI model is trained to generate a complete software application that is executable by the one or more processors.” The limitation describes AI model functionality. Thus, the limitation is insignificant extra-solution activity and is not integrated into a practical application because it does not impose any meaningful limits on practicing the abstract idea. So, the claim does not include any additional element that is sufficient to amount to significantly more than the judicial exception.

Claim 5

The claim recites the limitation “obtaining the output data comprises generating the output data using a software development kit that is accessible to the AI model.” The limitation generates output data using a software development kit. The step of generating output data can be done by a human with the aid of paper and pen. Thus, the limitation covers performance of the limitation in the mind. The additional element “software development kit” is a tool to perform the limitation and is recited at a high level of generality. Thus, the additional element is not integrated into a practical application because it does not impose any meaningful limits on practicing the abstract idea.

Claim 6

The claim recites the limitation “training the AI model using training data, wherein the training data comprises a plurality of training descriptions of sample applications and a plurality of executable packages corresponding to the training descriptions.” The limitation describes the purpose of the training data. Thus, the limitation is insignificant extra-solution activity and is not integrated into a practical application because it does not impose any meaningful limits on practicing the abstract idea. So, the claim does not include any additional element that is sufficient to amount to significantly more than the judicial exception.
Claim 7

The claim recites the limitation “determining that the one or more first executable packages satisfy one or more criteria.” The limitation determines whether the executable packages meet one or more criteria. The determining can be done by human observation and evaluation of the executable packages and criteria. Thus, the limitation covers performance of the limitation in the mind, and it is not integrated into a practical application because it does not impose any meaningful limits on practicing the abstract idea.

Claim 8

The claim recites the limitation “transferring the one or more first executable packages to a computing device in response to determining that the one or more first executable packages satisfy the one or more criteria.” The limitation determines whether to transfer the executable packages based on the executable packages satisfying the one or more criteria. The limitation can be done by human observation and evaluation of the executable packages and criteria to make a decision to transfer the executable packages. Thus, the limitation covers performance of the limitation in the mind, and it is not integrated into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 9

The claim recites the limitations “obtaining feedback on the software application; providing, to the AI model, second input data comprising an indication of the feedback; obtaining, from the AI model, second output data comprising one or more second executable packages of the software application, wherein the one or more second executable packages comprises one or more instructions that, when executed by the one or more processors, cause the one or more processors to perform one or more actions of the software application in accordance with the feedback.” These steps are directed to concepts of a developer receiving feedback and providing input data and output data based on the feedback. These steps can be done by a human with the aid of paper and pen to write a program. In other words, these steps are directed to a mental process. Thus, the limitations are not integrated into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

Claim 10

The claim recites the limitation “transferring the one or more first executable packages to a portable computing device, wherein the software application comprises a mobile application.” The limitation transfers the executable packages. The transfer is insignificant extra-solution activity and is not integrated into a practical application because it does not impose any meaningful limits on practicing the abstract idea. So, the claim does not include any additional element that is sufficient to amount to significantly more than the judicial exception.

Claim 11

The claim is statutory because it is directed to a system. The claim recites limitations in the same manner as claim 1; therefore, it is rejected for the same reasons.

Claims 12 – 20

Claims 12 – 20 recite limitations in the same manner as claims 2 – 10, respectively. Therefore, claims 12 – 20 are also rejected for the same reasons.
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3, 7 – 10, 11, 13, and 17 – 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by BODIN et al. (Pub. No. US 2020/0160270 A1; hereinafter Bodin).

Claim 1

Bodin teaches a method (Bodin; [0349] FIG. 4H shows a flowchart of an exemplary method for conversational bot app developing …), comprising:

obtaining a description of a software application (Bodin; Fig. 4H; [0349] … At step 4080, the user device 4055 receives spoken input from the user, e.g., via a microphone connected to or integrated with the user device 4055. The spoken input may comprise, for example, a verbal description of aspects of an app that the user wishes to create in the platform associated with the architecture 2000. For example, the user can verbally describe aspects, components, and functionality of the app, such as “start with a blue splash screen with the enterprise logo and app name, then transition to a login screen with a login object for a user to enter their credentials, then transition to a home screen with a menu containing” …);

providing, to an artificial intelligence (AI) model, first input data comprising an indication of the description (Bodin; Fig. 4H; [0350] At step 4082, the system determines an intent of the user based on the spoken input … the conversational bot module 4070 may use NLP to convert the spoken input to data that represents events that are useable by the event analytics engine 2070 and the event AI/NLP engine 2080 (included in the architecture 2000) to determine an intent of the user in the manner described herein …);

obtaining, from the AI model, first output data comprising one or more first executable packages of the software application (Bodin; Fig. 4H; [0351 – 0353] At step 4084, the system automatically performs actions in the IDE based on the determined user intent and insights … Still referring to step 4084, in accordance with aspects of the present disclosure, the recommendation engine 2090 causes the determined actions to be performed in the IDE … For example, the recommendation engine 2090 may cause the IDE to create screens, objects, widgets, etc., in an app project in the IDE, wherein the screens, objects, widgets, etc., are determined based on the user's intent determined from the spoken input and the insights determined from big data. … As another example, as depicted at step 4090, the conversational bot module 4070 may build the app and send a beta version of the app to the user device 4055 so that the user can install and run the beta version of the app on the user device 4055 for testing the app on the user device 4055.); and

storing the one or more first executable packages in memory for execution by one or more processors (Bodin; Fig. 4H; [0353] … As another example, as depicted at step 4090, the conversational bot module 4070 may build the app and send a beta version of the app to the user device 4055 so that the user can install and run the beta version of the app on the user device 4055 for testing the app on the user device 4055.)
Claim 3

Bodin also teaches the description comprises: an indication of one or more functionalities of the software application; an indication of an appearance of the software application; an indication of one or more operating platforms in which the software application is executed; or any combination thereof (Bodin; Fig. 4H; [0349] … At step 4080, the user device 4055 receives spoken input from the user … The spoken input may comprise, for example, a verbal description of aspects of an app that the user wishes to create in the platform associated with the architecture 2000. For example, the user can verbally describe aspects, components, and functionality of the app, such as “start with a blue splash screen with the enterprise logo and app name, then transition to a login screen with a login object for a user to enter their credentials, then transition to a home screen with a menu containing” …)

Claim 7

Bodin also teaches determining that the one or more first executable packages satisfy one or more criteria (Bodin; Fig. 4H; [0354] … In this manner, the system provides an iterative process by which the system receives user spoken input, determines intent (criteria) of the spoken input, performs actions in creating the app in the IDE based on the determined intent (criteria), and presents the results of the actions to the user for review. This iterative process can continue until the user is satisfied with the app, at which point the system can automatically publish the app to the marketplace on behalf of the user, e.g., as indicated at step 4092.) When the user is satisfied with the app → the user intent is met/satisfied.
Claim 8

Bodin also teaches transferring the one or more first executable packages to a computing device in response to determining that the one or more first executable packages satisfy the one or more criteria (Bodin; [0345] The architecture 2000 may include a conversational bot app designing functionality that is configured to create an app in the IDE based on user voice commands … present the results of the actions to the user device for review by the user. The process may iterate in this manner any number of times until the user is satisfied with the app that is created by the system based on the user input at the user device. In this manner, implementations of the invention provide a system and method for a user to create apps via voice command using their user device (e.g., smartphone) … [0353] … As another example, as depicted at step 4090, the conversational bot module 4070 may build the app and send (transfer) a beta version of the app to the user device 4055 so that the user can install and run the beta version of the app on the user device 4055 for testing the app on the user device 4055.)

Claim 9

Bodin also teaches obtaining feedback on the software application; providing, to the AI model, second input data comprising an indication of the feedback (Bodin; Fig. 4H; [0353 – 0354] … As another example, as depicted at step 4090, the conversational bot module 4070 may build the app and send a beta version of the app to the user device 4055 so that the user can install and run the beta version of the app on the user device 4055 for testing the app on the user device 4055. In embodiments, the conversational bot module 4070 is configured to receive feedback from the user via the user device 4055 and to modify or revise the app in the IDE (running in the architecture 2000) based on the feedback. For example, after either of steps 4088 and 4090, the process may return to step 4080 where the user may provide additional spoken input to the user device 4055 after reviewing aspects of the app on the user device 4055. The client 4065 may transmit this additional spoken input to the conversational bot module 4070, which may determine user intent from the additional spoken input in the manner described herein …);

obtaining, from the AI model, second output data comprising one or more second executable packages of the software application, wherein the one or more second executable packages comprises one or more instructions that, when executed by the one or more processors, cause the one or more processors to perform one or more actions of the software application in accordance with the feedback (Bodin; [0354] In embodiments, the conversational bot module 4070 is configured to receive feedback from the user via the user device 4055 and to modify or revise the app in the IDE (running in the architecture 2000) based on the feedback. For example, after either of steps 4088 and 4090, the process may return to step 4080 where the user may provide additional spoken input to the user device 4055 after reviewing aspects of the app on the user device 4055. The client 4065 may transmit this additional spoken input to the conversational bot module 4070, which may determine user intent from the additional spoken input in the manner described herein. Based on the user intent determined from the additional spoken input, the system may determine and perform additional actions in the IDE, i.e., to revise the app in the IDE based on the additional spoken input. In this manner, the system provides an iterative process by which the system receives user spoken input, determines intent of the spoken input, performs actions in creating the app in the IDE based on the determined intent, and presents the results of the actions to the user for review. This iterative process can continue until the user is satisfied with the app, at which point the system can automatically publish the app to the marketplace on behalf of the user, e.g., as indicated at step 4092.)

Claim 10

Bodin also teaches transferring the one or more first executable packages to a portable computing device, wherein the software application comprises a mobile application (Bodin; [0345] The architecture 2000 may include a conversational bot app designing functionality that is configured to create an app in the IDE based on user voice commands … present the results of the actions to the user device for review by the user. The process may iterate in this manner any number of times until the user is satisfied with the app that is created by the system based on the user input at the user device. In this manner, implementations of the invention provide a system and method for a user to create apps via voice command using their user device (e.g., smartphone) … [0353] … As another example, as depicted at step 4090, the conversational bot module 4070 may build the app and send (transfer) a beta version of the app to the user device 4055 so that the user can install and run the beta version of the app on the user device 4055 for testing the app on the user device 4055.) (Emphasis added.)

Claim 11

This is the system version of the method of claim 1; therefore, it is rejected for the same reasons. Furthermore, Bodin also teaches a system comprising one or more memories; and one or more processors (Bodin; [0345] The architecture 2000 may include a conversational bot app designing functionality that is configured to create an app in the IDE based on user voice commands … implementations of the invention provide a system and method for a user to create apps via voice command using their user device (e.g., smartphone) …) A smartphone comprises one or more processors and one or more memories.
Claim 13

This limitation is already discussed in claim 3; therefore, it is rejected for the same reasons.

Claim 17

This limitation is already discussed in claim 7; therefore, it is rejected for the same reasons.

Claim 18

This limitation is already discussed in claim 8; therefore, it is rejected for the same reasons.

Claim 19

This limitation is already discussed in claim 9; therefore, it is rejected for the same reasons.

Claim 20

This limitation is already discussed in claim 10; therefore, it is rejected for the same reasons.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 4, 6, 12, 14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Bodin in view of Arora et al. (Pub. No. US 2025/0244960 A1; hereinafter Arora).

Claim 2

Bodin teaches the description comprises a text description of the software application, an audio description of the software application, or any combination thereof (Bodin; Fig. 4H; [0349] … At step 4080, the user device 4055 receives spoken input from the user, e.g., via a microphone connected to or integrated with the user device 4055.
The spoken input may comprise, for example, a verbal description of aspects of an app that the user wishes to create in the platform associated with the architecture 2000 … Alternatively, the client 4065 may convert the audio data of the spoken input to text data using speech to text techniques, and then transmit the text data to the conversational bot module 4070 …); the one or more first executable packages comprises one or more instructions that, when executed by the one or more processors, cause the one or more processors to perform one or more actions of the software application in accordance with the description (Bodin; Fig. 4H; [0353] … As another example, as depicted at step 4090, the conversational bot module 4070 may build the app and send a beta version of the app to the user device 4055 so that the user can install and run the beta version of the app on the user device 4055 for testing the app on the user device 4055.)

But Bodin does not explicitly teach that the description comprises a video description or a visual illustration of the software application, or that the AI model comprises a generative AI model. However, Arora teaches the description comprises a text description of the software application, an audio description of the software application, a video description of the software application, a visual illustration of the software application, or any combination thereof (Arora; Fig. 4; [0058] FIG. 4 is a flowchart diagram depicting an example method of generating computer-executable code using a code editing system … [0060] At 402, method 400 can include receiving a user query (description of software application) at a model interface portion of a code editor user interface … The user query can include one or more prompts which may include text data, audio data, video data, image data, and various combinations thereof … At 404, method 400 can include providing the user query as one or more inputs to one or more machine-learned generative models … At 406, method 400 can include receiving one or more outputs from the generative model(s) including executable code generated in response to the user query … [0046] Generative model interface 266 includes a prompt editor 268 that allows users to formulate and submit user queries such as prompts to the generative model(s) 132. Prompt editor 268 can include an interface for receiving text inputs, image inputs, video inputs, or any other type of data. By way of example, a user can upload a file and provide a text input to generate a prompt, such as “are there any significant trends in the data of this file.” …); the AI model comprises a generative AI model (Arora; [0002] Artificial intelligence systems increasingly include large foundational machine-learned models which have the capability to provide a wide range of new product experiences. As an example, machine-learned generative models have proven successful at generating content including computer-executable code … [0029] … The systems and methods provide a single user interface (e.g., graphical user interface) for editing and executing computer-executable code, as well as accessing one or more machine-learned generative model(s) for editing and executing computer-executable code … A user can select compute, making it possible to build deep learning and generative artificial intelligence applications within a single user interface.)
Bodin and Arora are analogous art, as they are in the same field of endeavor: generating code by artificial intelligence (AI). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Arora's teachings into Bodin's invention to explicitly allow Bodin's AI to also include generative AI to generate code from a software description, as suggested by Arora ([0002 & 0029]), as the generative AI can learn from large sets of information and boost productivity through task automation and enhanced creativity.

Claim 4

Arora teaches the AI model is trained to generate a complete software application that is executable by the one or more processors (Arora; [0002] Artificial intelligence systems increasingly include large foundational machine-learned models which have the capability to provide a wide range of new product experiences. As an example, machine-learned generative models have proven successful at generating content including computer-executable code … [0029] … The systems and methods provide a single user interface (e.g., graphical user interface) for editing and executing computer-executable code, as well as accessing one or more machine-learned generative model(s) for editing and executing computer-executable code … A user can select compute, making it possible to build deep learning and generative artificial intelligence applications within a single user interface. [0040] … The generative content generated by generative models 115 can include computer-executable code data, text data, image data, video data, audio data, or other types of generative content. The generative model can be trained to process input data to generate output data. The input data can include text data, image data, audio data, latent encoding data, and/or other input data, which may include multimodal data. The output data can include computer-executable code data … See also Fig. 5 and [0068 – 0074], training the models.) The motivation for incorporating Arora into Bodin is the same as the motivation in claim 2.

Claim 6

Arora teaches training the AI model using training data, wherein the training data comprises a plurality of training descriptions of sample applications (Arora; [0052] FIG. 3D additionally depicts an example text prompt 369-1 that is received by the code editor via the prompt editor interface 368. In this example, a user has provided a text input, “Create a pipeline that will create an animal classifier based on this data.” (description) Fig. 4; [0060] At 402, method 400 can include receiving a user query at a model interface portion of a code editor user interface … The user query can include one or more prompts (description) which may include text data, audio data, video data, image data, and various combinations thereof … [0062] At 406, method 400 can include receiving one or more outputs from the generative model(s) including executable code generated in response to the user query … The generative model can be trained to process input data to generate output data. The input data (description) can include text data, image data, audio data, latent encoding data, and/or other input data, which may include multimodal data. The output data can include computer-executable code data …) and a plurality of executable packages corresponding to the training descriptions (Arora; Figs. 6 & 11; [0075] FIG. 6 is a block diagram of an example processing flow for using machine-learned model(s) 1 to process input(s) 2 to generate output(s) 3.
[0080 – 0082] Example data types for input(s) 2 (description) or output(s) 3 (executable package(s)) include natural language text data, software code data (e.g., source code, object code, machine code, or any other form of computer-readable instructions or programming languages) … In multimodal inputs 2 or outputs 3, example combinations of data types include image data and audio data, image data and natural language data, natural language data and software code data, image data and biometric data, sensor data and medical data … [0135 – 0136] FIG. 11 is a block diagram of an inference system for operating one or more machine-learned model(s) 1 to perform inference (e.g., for training, for deployment, etc.). A model host 31 can receive machine-learned model(s) 1. Model host 31 can host one or more model instance(s) 31-1 … (emphasis added.) Model host 31 can perform inference on behalf of one or more client(s) 32. Client(s) 32 can transmit an input request 33 to model host 31. Using input request 33, model host 31 can obtain input(s) 2 for input to machine-learned model(s) 1. Machine-learned model(s) 1 can process input(s) 2 to generate output(s) 3. Using output(s) 3, model host 31 can return an output payload 34 for responding to input request 33 from client(s) 32. Output payload 34 can include or be based on output(s) 3.) The inputs 2 and outputs 3 were used by the machine-learned model 1 to produce the output payload 34 during training. The motivation for incorporating Arora into Bodin is the same as the motivation in claim 2.

Claim 12

This limitation is already discussed in claim 2; therefore, it is also rejected for the same reasons.

Claim 14

This limitation is already discussed in claim 4; therefore, it is also rejected for the same reasons.

Claim 16

This limitation is already discussed in claim 6; therefore, it is also rejected for the same reasons.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Bodin in view of Abraham et al. (Pub. No. US 2025/0077290 A1; hereinafter Abraham).

Claim 5

Bodin does not explicitly teach obtaining the output data comprises generating the output data using a software development kit that is accessible to the AI model. However, Abraham teaches obtaining the output data comprises generating the output data using a software development kit that is accessible to the AI model (Abraham; Fig. 3; [0052] … AI engine 310 includes a plurality of ML models 320. Each ML model 320 is trained to generate sets of optimized software code 400 for a specific computerized task 340 … In additional embodiments of the invention, ML models 320 are configured to receive second inputs 350 that define Software Development Kits (SDKs) 220 versions specific to the task and/or third inputs 360 that define user-specified constraints 230 (i.e., criteria or the like) used by the ML model 320 to determine the most optimal 342 set of computer codes 400 …)

Bodin and Abraham are analogous art, as they are in the same field of endeavor: generating code by artificial intelligence (AI). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Abraham's teachings into Bodin's invention to explicitly allow Bodin's AI to include a software development kit (SDK) for a specified task, thereby determining an optimal set of software codes for a specified task that is specific to one or more of the available hardware resources, as suggested by Abraham ([0044]).

Claim 15

This limitation is already discussed in claim 5; therefore, it is rejected for the same reasons.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CUONG V LUU whose telephone number is (571) 270-1733. The examiner can normally be reached 6:30 AM - 3:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hyung S. Sough, can be reached at (571) 272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CUONG V LUU/
Examiner, Art Unit 2192

/S. Sough/
SPE, Art Unit 2192
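The training configuration the examiner maps from Arora for claim 6 is a generative model trained on pairs of natural-language descriptions (input(s) 2) and corresponding executable packages (output(s) 3). A minimal sketch of that training-data shape is below; the `TrainingExample` class, the second sample pair, and the package filenames are hypothetical placeholders for illustration, not content from Arora or the application.

```python
# Illustrative sketch of the claimed training data: each example pairs a
# natural-language description of an application with an executable package.
from dataclasses import dataclass


@dataclass
class TrainingExample:
    description: str         # text prompt describing the desired application
    executable_package: str  # package the model should learn to produce


# The first description is the example prompt quoted from Arora [0052];
# everything else here is a hypothetical placeholder.
TRAINING_PAIRS = [
    TrainingExample(
        description="Create a pipeline that will create an animal "
                    "classifier based on this data.",
        executable_package="classifier_pipeline.zip",
    ),
    TrainingExample(
        description="Build a CLI tool that validates JSON files.",
        executable_package="json_validator.tar.gz",
    ),
]


def build_training_batch(pairs):
    """Shape the (input, target) tuples a generative model would train on:
    input(s) 2 = description, output(s) 3 = executable package."""
    return [(p.description, p.executable_package) for p in pairs]


batch = build_training_batch(TRAINING_PAIRS)
print(len(batch))  # number of (description, package) training tuples
```

The point of the sketch is only the data mapping the rejection relies on: descriptions on the input side, executable packages on the target side.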

Prosecution Timeline

Mar 01, 2024
Application Filed
Mar 06, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602208
SYSTEM AND METHOD FOR SOURCE CODE GENERATION
2y 5m to grant Granted Apr 14, 2026
Patent 12585435
REAL-TIME VISUALIZATION OF COMPLEX SOFTWARE ARCHITECTURE
2y 5m to grant Granted Mar 24, 2026
Patent 12572714
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 10, 2026
Patent 12572447
SELECTIVE TRACING OF ENTITIES DURING CODE EXECUTION USING DYNAMIC TRACING CONFIGURATION
2y 5m to grant Granted Mar 10, 2026
Patent 12561396
PERSONALIZED PARTICULATE MATTER EXPOSURE MANAGEMENT USING FINE-GRAINED WEATHER MODELING AND OPTIMAL CONTROL THEORY
2y 5m to grant Granted Feb 24, 2026
Study what changed in these applications to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
72%
Grant Probability
99%
With Interview (+36.7%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 963 resolved cases by this examiner. Grant probability derived from career allow rate.
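The projection figures above can be sanity-checked from the dashboard's stated inputs (692 granted of 963 resolved, +36.7% interview lift). The sketch below assumes the lift is a simple multiplicative boost on the career allow rate, which the page implies but does not state; its own 99% figure may instead come directly from resolved-with-interview cases.

```python
# Reproduce the dashboard's headline numbers from its stated inputs.
# Assumption: the +36.7% interview lift is a relative (multiplicative) boost.
granted, resolved = 692, 963

base_rate = granted / resolved          # career allow rate
interview_lift = 0.367                  # +36.7% relative lift
with_interview = min(base_rate * (1 + interview_lift), 1.0)

print(round(base_rate * 100))       # 72, matching the 72% grant probability
print(round(with_interview * 100))  # 98; close to the page's 99%, so the
                                    # page likely computes its lift from the
                                    # interview subset rather than this way
```

The small gap between 98% and the quoted 99% is itself informative: it signals the dashboard's interview figure is an empirical rate, not a derived multiplier.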
