Prosecution Insights
Last updated: April 19, 2026
Application No. 18/218,048

FRONT-END USER INTERFACE DESIGN TOOL AND HUMAN READABLE CODE GENERATOR

Non-Final OA: §103, §112, Double Patenting
Filed: Jul 04, 2023
Examiner: NGUYEN, DUY KHUONG THANH
Art Unit: 2199
Tech Center: 2100 — Computer Architecture & Software
Assignee: Native Pixel Inc.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average; +26.6% vs TC avg), 440 granted / 539 resolved
Interview Lift: +35.2% among resolved cases with an interview (a strong lift)
Typical Timeline: 2y 9m avg prosecution; 38 applications currently pending
Career History: 577 total applications across all art units
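The interview-lift figure above compares allowance outcomes among this examiner's resolved cases with and without an applicant interview. As a minimal sketch of how such a lift could be computed, expressed in percentage points (the case counts below are hypothetical placeholders, not the examiner's actual data):

```python
def allowance_rate(granted: int, resolved: int) -> float:
    """Fraction of resolved cases that were allowed."""
    return granted / resolved

def interview_lift(with_iv: tuple[int, int], without_iv: tuple[int, int]) -> float:
    """Percentage-point difference in allowance rate, with vs. without interview.

    Each argument is a (granted, resolved) pair.
    """
    return 100 * (allowance_rate(*with_iv) - allowance_rate(*without_iv))

# Hypothetical counts: 190/200 allowed with interview, 204/339 without.
lift = interview_lift(with_iv=(190, 200), without_iv=(204, 339))
print(f"Interview lift: {lift:+.1f} pts")
```

With these placeholder counts the lift works out to roughly +35 points, in the same ballpark as the figure reported above; the actual dashboard value would come from the examiner's real case history.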

Statute-Specific Performance

§101: 13.3% (-26.7% vs TC avg)
§103: 59.8% (+19.8% vs TC avg)
§102: 6.3% (-33.7% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)

Tech Center averages are estimates • Based on career data from 539 resolved cases
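Each delta above is the examiner's per-statute rate minus the Tech Center baseline (notably, every row implies the same ~40% baseline). One plausible reading of the metric is the share of the examiner's rejections raised under each statute; a sketch of how such a table could be derived, using hypothetical rejection counts and that assumed 40% baseline:

```python
from collections import Counter

# Hypothetical per-statute rejection counts for one examiner; not real USPTO data.
rejections = Counter({"§101": 16, "§102": 8, "§103": 72, "§112": 12})
TC_AVG = 0.40  # assumed common Tech Center baseline, inferred from the deltas above

total = sum(rejections.values())
rows = {}
for statute, count in sorted(rejections.items()):
    share = count / total
    rows[statute] = (share, share - TC_AVG)
    print(f"{statute}: {share:.1%} ({share - TC_AVG:+.1%} vs TC avg)")
```

Under this reading, §103 dominates the examiner's rejection mix, consistent with the table; the real dashboard presumably computes shares from the 539 resolved cases rather than these placeholder counts.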

Office Action

§103 §112 §DP
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This is the initial Office action based on the application filed on July 04, 2023, in which claims 1-20 are presented for examination.

Status of Claims

3. Claims 1-20 are pending, of which claims 1 and 15 are in independent form.

Priority

4. No priority has been considered for this application.

Information Disclosure Statement

5. The information disclosure statement filed on 07/04/2023 has been reviewed and considered by the Examiner.

The Office's Note

6. The Office has cited particular paragraphs/columns and line numbers in the reference(s) applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim(s), other passages and figures may apply as well. The Applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the cited passages as taught by the prior art or relied upon by the Examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

7. Claim 1 recites the limitation “the design elements” in line 10. There is insufficient antecedent basis for this limitation in the claim.

8.
Claim 14 recites the limitation “The system of claim 30” in line 10. There is insufficient antecedent basis for this limitation in the claim. The Office believes it should be “The method of claim 13”.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

9. Claims 1-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of copending Application No. 18218051. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-20 of copending Application No. 18218051 teach a UI design system with automatic front-end/back-end code generation, whereas claims 1-20 of the instant Application No. 18218048 teach a front-end user interface design tool and human readable code generation. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

10. Claims 1-4, 6, 8-15 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar (US 2019025089 – hereinafter Kumar), in view of Anderson (US 20220083316 – hereinafter Anderson) and further in view of Parsolano (US 20180275971 – hereinafter Parsolano).

Claim 1 is rejected: Kumar teaches a method for automatically generating front-end code for a user interface (UI) design created in a graphical UI editor, comprising (Kumar, abstract and summary): receiving a design file key and an access token associated with the design file (Kumar, US 2019025089, fig. 18 and para [0197], IMS 1828 may be configured to provide various security-related services such as identity services, such as information access management, authentication and authorization services, services for managing customer identities and roles and related capabilities, and the like.); enabling user selection of the one or more frames for import and retrieving the selected frames along with their components (Kumar, Para [0082], User interactions with model generation system 102 may take various forms. A user may provide GUI screen images 104 to model generation system 102 via these interactions using one or more interfaces provided by model generation system 102.
Para [0135-0136], enables a user to import (e.g., drag and place) a GUI screen image designed by a UI developer.); retrieving image files used within the one or more frames (Kumar, fig. 1 and para [0059-0060], GUI screen images 104 may be generated using a computer aided design tool and saved in a digital format, or may be generated manually as sketches on paper and then be scanned into digital images. Para [0082], User interactions with model generation system 102 may take various forms. A user may provide GUI screen images 104 to model generation system 102 via these interactions using one or more interfaces provided by model generation system 102.); mapping the design elements to code templates for one or more front-end frameworks(Kumar, fig. 1 and para [0023], generating the one or more implementations of the GUI based upon the GUI model may include generating the one or more implementations of the GUI using the GUI model and one or more code generation templates, where each code generation template is associated with a platform or a programming language. Para [0025], mapping the features extracted from the plurality of training images to data points in a multi-dimensional space, where the data points may form a set of clusters in the multi-dimensional space; extracting features from the sub-image of the first UI component or the new UI component; mapping the features extracted from the sub-image of the first UI component or the new UI component to a data point in the multi-dimensional space) ; generating front-end code based on the mapped design elements and code templates(Kumar, para [0076-0078], GUI model 124 may include information specifying a particular GUI window or screen comprising a particular set of UI components and mapped to a particular set of functions or actions. 
A GUI implementation (e.g., the code or instructions implementing the GUI) generated based upon GUI model 124 may include code and logic for instantiating the particular GUI screen with the particular set of UI components and mapped to the particular set of functions or actions. Para [0079], In certain embodiments, GUI implementations 110, 112, and 114 may each correspond to a code generation template that can be used to implement the GUI. A code generation template may include one or more source code files containing high-level code (which may include methods, functions, classes, event handlers, and the like) that can be compiled or interpreted to generate a GUI executable for executing by one or more processors of a computer system.); and outputting the generated front-end code in a format suitable for use in a web or mobile application(Kumar, para [0079], In certain embodiments, GUI implementations 110, 112, and 114 may each correspond to a code generation template that can be used to implement the GUI. A code generation template may include one or more source code files containing high-level code (which may include methods, functions, classes, event handlers, and the like) that can be compiled or interpreted to generate a GUI executable for executing by one or more processors of a computer system. Para [106], Page generator 308 may take GUI model 306 as input and generate code implementing the GUI in a target language for a target platform, such as a mobile device that is operated using iOS® or Android® or a system with a wide-screen that is operated using iOS®, Windows®, or Linux). 
Kumar does not explicitly teach using the design file key and access token to retrieve one or more outermost frames within a project, or obtaining thumbnails of the one or more frames. However, Anderson teaches using the design file key and access token to retrieve one or more outermost frames within a project (Anderson, US 20220083316, para [0040-0041], The program interface 102 may also retrieve programmatic resources that include an application framework for use with canvas 122. The application framework can include data sets which define or configure, for example, a set of interactive graphic tools (e.g., design tool panels 318, see FIG. 3A through FIG. 3F) that integrate with the canvas 122 and which comprise the design interface 118, to enable the design user to provide input for creating and/or editing the design of the user interface. Para [0042], The program interface 102 can be used to retrieve, from local or remote sources, programmatic resources and data sets which include files 101 which comprise an active workspace for the design user. The retrieved data sets can include one or more pages that include design elements which collectively form a UI design under edit. Each file 101 can include one or multiple data structure representations 111 which collectively define the design interface. The files 101 may also include additional data sets which are associated with the active workspace.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. Thus, one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate Anderson into Kumar to cause the user device to perform operations, and provide a design interface, where the design interface includes multiple tool panels and a canvas.
The design interface enables design users to specify design input to create or modify the design under edit on the canvas. The processor enables multiple users to interact with the design interface to define a variant set, as suggested by Anderson (See abstract and summary).

Kumar and Anderson do not explicitly teach obtaining thumbnails of the one or more frames. However, Parsolano teaches obtaining thumbnails of the one or more frames (Parsolano, US 20180275971, fig. 5 and para [0059], thumbnail). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. Thus, one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate Parsolano into Kumar and Anderson to convert graphical user interface (GUI) design elements into application programming interface (API)-integratable computer code and logic, as suggested by Parsolano (See abstract and summary).

Claim 2 is rejected for the reasons set forth hereinabove for claim 1. Kumar, Anderson and Parsolano teach the method of claim 1, wherein the design file key is obtained from a URL of the design file (Anderson, fig. 1B and para [0058-0059], In an example of FIG. 1B, the network computing system 150 perform operations to enable the IGDS 100 to be implemented on the user computing device 10. In variations, the network computing system 150 provides a network service 152 to support the use of the IGDS 100 by user computing devices that utilize browsers or other web-based applications. The network computing system 150 can include a site manager 158 to manage a website where a set of web-resources 155 (e.g., web page) are made available for site visitors. The web-resources 155 can include instructions, such as scripts or other logic (“IGDS instructions 157”), which are executable by browsers or web components of user computing devices.).
Claim 3 is rejected for the reasons set forth hereinabove for claim 1, Kumar, Anderson and Parsolano teach the method of claim 1, wherein the access token is generated by a user with access to the design file in Figma (Kumar, fig. 18 and para [0197], IMS 1828 may be configured to provide various security-related services such as identity services, such as information access management, authentication and authorization services, services for managing customer identities and roles and related capabilities, and the like. Anderson, para [0040-0042].). Claim 4 is rejected for the reasons set forth hereinabove for claim 1, Kumar, Anderson and Parsolano teach the method of claim 1, wherein the one or more frames in Figma are grouped within "Frame" elements for import(Kumar, Para [0082], User interactions with model generation system 102 may take various forms. A user may provide GUI screen images 104 to model generation system 102 via these interactions using one or more interfaces provided by model generation system 102. Para [0135-0136], enables a user to import (e.g., drag and place) a GUI screen image designed by a UI developer.). Claim 6 is rejected for the reasons set forth hereinabove for claim 1, Kumar, Anderson and Parsolano teach the method of claim 1, wherein the component names within the frames are adjusted by removing whitespaces and ensuring uniqueness for identification purposes (Parsolano, para [0070-0071], when the app developer wants to make changes to the UI, they simply update the UI objects and logic stored in the Cloud. Thus, one advantage is that by using a unique serial ID, the shell programs on Windows and MacOS may detect that their UI's are out of sync with the latest locally-cached version and request the updated versions from the Cloud.). 
Claim 8 is rejected for the reasons set forth hereinabove for claim 1, Kumar, Anderson and Parsolano teach the method of claim 1, comprising selecting multiple frames and their associated components for import(Kumar, Para [0082], User interactions with model generation system 102 may take various forms. A user may provide GUI screen images 104 to model generation system 102 via these interactions using one or more interfaces provided by model generation system 102. Para [0135-0136], enables a user to import (e.g., drag and place) a GUI screen image designed by a UI developer.). Claim 9 is rejected for the reasons set forth hereinabove for claim 1, Kumar, Anderson and Parsolano teach the method of claim 1, comprising structuring imported frames and components according to their hierarchy as observed in an imported project (Kumar, Para [0082], User interactions with model generation system 102 may take various forms. A user may provide GUI screen images 104 to model generation system 102 via these interactions using one or more interfaces provided by model generation system 102. Para [0135-0136], enables a user to import (e.g., drag and place) a GUI screen image designed by a UI developer.). Claim 10 is rejected for the reasons set forth hereinabove for claim 1, Kumar, Anderson and Parsolano teach the method of claim 1, comprising generating output for a target language (Kumar, para [00148], In some embodiments, the GUI model (e.g., in JSON format) can be used to generate code in different programming languages, such as markup languages (e.g., HTML or XML) or stylesheet languages (e.g., cascading style sheet (CSS)).). 
Claim 11 is rejected for the reasons set forth hereinabove for claim 1, Kumar, Anderson and Parsolano teach the method of claim 10, wherein the target language comprises hypertext markup language, cascading style sheet, JavaScript, React, Angular, or Vue (Kumar, para [00148], In some embodiments, the GUI model (e.g., in JSON format) can be used to generate code in different programming languages, such as markup languages (e.g., HTML or XML) or stylesheet languages (e.g., cascading style sheet (CSS)).). Claim 12 is rejected for the reasons set forth hereinabove for claim 1, Kumar, Anderson and Parsolano teach the method of claim 1, comprising displaying a user interface design and allowing the user to select options for front-end code generation, such as framework and language selection(Kumar, para [00148], In some embodiments, the GUI model (e.g., in JSON format) can be used to generate code in different programming languages, such as markup languages (e.g., HTML or XML) or stylesheet languages (e.g., cascading style sheet (CSS)). Para [0059], GUI implementations 110, 112, and 114 may be executable by one or more processors to display the GUI on different platforms.). 
Claim 13 is rejected for the reasons set forth hereinabove for claim 1, Kumar, Anderson and Parsolano teach the method of claim 1, comprising detecting an input information regarding a desired GUI on a first device, then recursively selecting a native code template that describes GUI features on a second device similar to the feature from the input information regarding the desired GUI, and creating an identical visual image on the second device as on the first device, but with each device utilizing properties and elements within its own native code to create the visual image(Parsolano, abstract, Disclosed is a user interface (UI) Platform system for converting graphical user interface (GUI) design elements into API-integratable computer code (including native code) and logic, comprising a software tool. Para [0033], When a timeline is an input, the software algorithm further accepts information describing the intended receiving computing device 340 that the graphical user interface will ultimately be used on. Here, the software algorithm preferably comprises at least three templates for software code that describes/defines a graphical user interface in the native code environment of the receiving computing device 340. In this procedure the software algorithm recursively compares the input information regarding the desired graphical user interface with the relevant templates to choose the template that closest represents the features of the information input to describe the desired graphical user interface (the “best fit template”). Next, the software algorithm further alters the recursively-selected template to match at least one of sizing, colors, position, layout, and function of the input information describing the desired graphical user interface. Para [0041-0042]. Para [0060-0063], The components created, designed, and manipulated in the UI Platform are called UI Components, or synonymously, Data Aware Components. 
One natural extension of Data Aware Components is the ability to attach programming logic to a UI object. Accordingly, each UI element can be programmed, and when exported, the UI and associated logic is converted by a template to a (sometimes) different programming language. In a preferred embodiment, the preferred programming language for the UI Platform/client is JavaScript. ). Claim 14 is rejected for the reasons set forth hereinabove for claim 30, Kumar, Anderson and Parsolano teach the system of claim 30, wherein the native code templates related to features of a GUI include code templates related to any UI element, operation or event, such as buttons, text areas, animations, photographs, movies, text entry fields, or interfaces with backend application code(Anderson, para [0056], In an another alternative, elements including animations, movies, audio, pictures, buttons, and text input areas are described in the input graphic files and the additional input information. The software algorithm 400 then develops native source code for each of these elements that are included in the input graphic files (along with the additional input information). Upon reading this disclosure, other variation and processes for achieving substantially similar results are apparent to those of ordinary skill in the software programming arts.). Claim 15 is rejected, Kumar teaches a system, comprising: a hardware processor (Kumar, para [0005], processors); and a front end development software tool executed by the processor using a first UI element associated with a first function specified using a predetermined language, a translation layer for associating the function associated with the first UI element with a device native computer code associated with a first device(Kumar, fig. 1 and para [0059], As shown in FIG. 
1, system 100 may include a model generation system (MGS) 102 that is configured to receive one or more GUI screen images 104 for a GUI as input and generate a GUI model 124 for the GUI based upon the one or more GUI screen images 104. GUI model 124 may then be consumed by one or more downstream model consumers 103, who may generate one or more GUI implementations 110, 112, and 114 of the GUI based upon GUI model 124 substantially free of manual coding. GUI implementations 110, 112, and 114 may be executable by one or more processors to display the GUI on different platforms. Para [0060-0064], GUI screen images 104 may be generated using a computer aided design tool and saved in a digital format, or may be generated manually as sketches on paper and then be scanned into digital images. Para [0082], User interactions with model generation system 102 may take various forms. A user may provide GUI screen images 104 to model generation system 102 via these interactions using one or more interfaces provided by model generation system 102. Para [0135-0136], enables a user to import (e.g., drag and place) a GUI screen image designed by a UI developer. Kumar, para [00148], In some embodiments, the GUI model (e.g., in JSON format) can be used to generate code in different programming languages, such as markup languages (e.g., HTML or XML) or stylesheet languages (e.g., cascading style sheet (CSS)). Para [0059], GUI implementations 110, 112, and 114 may be executable by one or more processors to display the GUI on different platforms.); and a template that converts data from a first function into computer-generated human readable created code as an API-integratable computer code and logic that is native to a second device(Kumar, para [0076-0078], GUI model 124 may include information specifying a particular GUI window or screen comprising a particular set of UI components and mapped to a particular set of functions or actions. 
A GUI implementation (e.g., the code or instructions implementing the GUI) generated based upon GUI model 124 may include code and logic for instantiating the particular GUI screen with the particular set of UI components and mapped to the particular set of functions or actions. Para [0079], In certain embodiments, GUI implementations 110, 112, and 114 may each correspond to a code generation template that can be used to implement the GUI. A code generation template may include one or more source code files containing high-level code (which may include methods, functions, classes, event handlers, and the like) that can be compiled or interpreted to generate a GUI executable for executing by one or more processors of a computer system. Para [106], Page generator 308 may take GUI model 306 as input and generate code implementing the GUI in a target language for a target platform, such as a mobile device that is operated using iOS® or Android® or a system with a wide-screen that is operated using iOS®, Windows®, or Linux. Kumar, para [00148], In some embodiments, the GUI model (e.g., in JSON format) can be used to generate code in different programming languages, such as markup languages (e.g., HTML or XML) or stylesheet languages (e.g., cascading style sheet (CSS)). Para [0059], GUI implementations 110, 112, and 114 may be executable by one or more processors to display the GUI on different platforms.). The Office would like to use prior art Anderson to back up Kumar to further teach limitation a front end development software tool(Anderson, para [0038-0049], According to examples, user of computing device 10 operates web-based application 80 to access a network site, where programmatic resources are retrieved and executed to implement the IGDS 100. In this way, a design user may initiate a session to implement the IGDS 100 for purpose of designing a run-time user interface for an application or program. 
In examples, the IGDS 100 includes a program interface 102, a design interface 118, and a rendering engine 120. The program interface 102 can include one or more processes which execute to access and retrieve programmatic resources from local and/or remote sources.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. Thus, one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate Anderson into Kumar to cause the user device to perform operations, and provide a design interface, where the design interface includes multiple tool panels and a canvas. The design interface enables design users to specify design input to create or modify the design under edit on the canvas. The processor enables multiple users to interact with the design interface to define a variant set, as suggested by Anderson (See abstract and summary). The Office would like to use prior art Parsolano to back up Kumar and Anderson to further teach the limitation of a template that converts data from a first function into computer-generated human readable created code as an API-integratable computer code and logic that is native to a second device (Parsolano, abstract, Disclosed is a user interface (UI) Platform system for converting graphical user interface (GUI) design elements into API-integratable computer code (including native code) and logic, comprising a software tool. Para [0033 and 0041-0042], template. Para [0060-0063], The components created, designed, and manipulated in the UI Platform are called UI Components, or synonymously, Data Aware Components. One natural extension of Data Aware Components is the ability to attach programming logic to a UI object.
Accordingly, each UI element can be programmed, and when exported, the UI and associated logic is converted by a template to a (sometimes) different programming language. In a preferred embodiment, the preferred programming language for the UI Platform/client is JavaScript.) It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. Thus, one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate Parsolano into Kumar and Anderson to convert graphical user interface (GUI) design elements into application programming interface (API)-integratable computer code and logic, as suggested by Parsolano (See abstract and summary). Claim 17 is rejected for the reasons set forth hereinabove for claim 15. Kumar, Anderson and Parsolano teach the system of claim 15, comprising computer readable code for: receiving a design file key and an access token associated with the design file (Kumar, fig. 18 and para [0197], IMS 1828 may be configured to provide various security-related services such as identity services, such as information access management, authentication and authorization services, services for managing customer identities and roles and related capabilities, and the like.); using the design file key and access token to retrieve one or more outermost frames within a project (Anderson, US 20220083316, para [0040-0041], The program interface 102 may also retrieve programmatic resources that include an application framework for use with canvas 122. The application framework can include data sets which define or configure, for example, a set of interactive graphic tools (e.g., design tool panels 318, see FIG. 3A through FIG.
3F) that integrate with the canvas 122 and which comprise the design interface 118, to enable the design user to provide input for creating and/or editing the design of the user interface. Para [0042], The program interface 102 can be used to retrieve, from local or remote sources, programmatic resources and data sets which include files 101 which comprise an active workspace for the design user. The retrieved data sets can include one or more pages that include design elements which collectively form a UI design under edit. Each file 101 can include one or multiple data structure representations 111 which collectively define the design interface. The files 101 may also include additional data sets which are associated with the active workspace.); obtaining thumbnails of the one or more frames (Parsolano, US 20180275971, fig. 5 and para [0059], thumbnail); enabling user selection of the one or more frames for import and retrieving the selected frames along with their components (Kumar, Para [0082], User interactions with model generation system 102 may take various forms. A user may provide GUI screen images 104 to model generation system 102 via these interactions using one or more interfaces provided by model generation system 102. Para [0135-0136], enables a user to import (e.g., drag and place) a GUI screen image designed by a UI developer.); retrieving image files used within the one or more frames (Kumar, fig. 1 and para [0059-0060], GUI screen images 104 may be generated using a computer aided design tool and saved in a digital format, or may be generated manually as sketches on paper and then be scanned into digital images. Para [0082], User interactions with model generation system 102 may take various forms.
A user may provide GUI screen images 104 to model generation system 102 via these interactions using one or more interfaces provided by model generation system 102.); mapping the design elements to code templates for one or more front-end frameworks (Kumar, fig. 1 and para [0023], generating the one or more implementations of the GUI based upon the GUI model may include generating the one or more implementations of the GUI using the GUI model and one or more code generation templates, where each code generation template is associated with a platform or a programming language. Para [0025], mapping the features extracted from the plurality of training images to data points in a multi-dimensional space, where the data points may form a set of clusters in the multi-dimensional space; extracting features from the sub-image of the first UI component or the new UI component; mapping the features extracted from the sub-image of the first UI component or the new UI component to a data point in the multi-dimensional space); generating front-end code based on the mapped design elements and code templates (Kumar, para [0076-0078], GUI model 124 may include information specifying a particular GUI window or screen comprising a particular set of UI components and mapped to a particular set of functions or actions. A GUI implementation (e.g., the code or instructions implementing the GUI) generated based upon GUI model 124 may include code and logic for instantiating the particular GUI screen with the particular set of UI components and mapped to the particular set of functions or actions. Para [0079], In certain embodiments, GUI implementations 110, 112, and 114 may each correspond to a code generation template that can be used to implement the GUI.
A code generation template may include one or more source code files containing high-level code (which may include methods, functions, classes, event handlers, and the like) that can be compiled or interpreted to generate a GUI executable for executing by one or more processors of a computer system.); and outputting the generated front-end code in a format suitable for use in a web or mobile application (Kumar, para [0079], In certain embodiments, GUI implementations 110, 112, and 114 may each correspond to a code generation template that can be used to implement the GUI. A code generation template may include one or more source code files containing high-level code (which may include methods, functions, classes, event handlers, and the like) that can be compiled or interpreted to generate a GUI executable for executing by one or more processors of a computer system. Para [0106], Page generator 308 may take GUI model 306 as input and generate code implementing the GUI in a target language for a target platform, such as a mobile device that is operated using iOS® or Android® or a system with a wide-screen that is operated using iOS®, Windows®, or Linux).
Claim 18 is rejected for the reasons set forth hereinabove for claim 17; Kumar, Anderson and Parsolano teach the system of claim 17, wherein the design file key is obtained from a URL of the design file (Anderson, fig. 1B and para [0058-0059], In an example of FIG. 1B, the network computing system 150 performs operations to enable the IGDS 100 to be implemented on the user computing device 10. In variations, the network computing system 150 provides a network service 152 to support the use of the IGDS 100 by user computing devices that utilize browsers or other web-based applications. The network computing system 150 can include a site manager 158 to manage a website where a set of web-resources 155 (e.g., web page) are made available for site visitors.
The web-resources 155 can include instructions, such as scripts or other logic (“IGDS instructions 157”), which are executable by browsers or web components of user computing devices.).
Claim 19 is rejected for the reasons set forth hereinabove for claim 17; Kumar, Anderson and Parsolano teach the system of claim 17, wherein the access token is generated by a user with access to the design file in Figma (Kumar, fig. 18 and para [0197], IMS 1828 may be configured to provide various security-related services such as identity services, such as information access management, authentication and authorization services, services for managing customer identities and roles and related capabilities, and the like. Anderson, para [0040-0042].).
11. Claims 5, 7, 16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar (US 2019025089 – hereinafter Kumar), in view of Anderson (US 20220083316 – hereinafter Anderson), in view of Parsolano (US 20180275971 – hereinafter Parsolano) and further in view of Kaliyaperumal (US 20230095089 – hereinafter Kaliyaperumal). With respect to claim 5, Kumar, Anderson and Parsolano do not explicitly teach all limitations of claim 5. However, Kaliyaperumal teaches these limitations. Claim 5 is rejected for the reasons set forth hereinabove for claim 4; Kumar, Anderson, Parsolano and Kaliyaperumal teach the method of claim 4, wherein one or more frame thumbnails obtained from Figma's representational state transfer architectural style application programming interface (REST API) are saved temporarily until the user completes a Frame selection process (Kaliyaperumal, US 20230095089, Some extractors 118 are configured to use application program interfaces (APIs) associated with the content type to extract the content entities 122 from associated input content 106 (e.g., an extractor 118 associated with FIGMA interface design content uses FIGMA APIs to extract text, controls, objects, and other metadata or content entities from an input FIGMA file).
Other types of extractors 118 are configured to use a combination of type-specific APIs, trained analysis models, and/or other methods to extract the content entities 122 from input content 106. For instance, an extractor 118 of a content type 108 that has text extraction APIs but no APIs for extracting objects or shapes is configured to use an object detection model (e.g., like object detection model 126) to extract objects, shapes, and/or structures in association with text that is extracted using the text extraction APIs.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. Thus, one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate Kaliyaperumal into Kumar, Anderson and Parsolano to enable a wide variety of different types of input to be converted into application templates using type-specific content data extractors in an unconventional manner. The method allows developers to quickly obtain an application that has both the components they need and the designs they want, and helps developers that are not familiar with code avoid feeling overwhelmed and unsure about how to get started with developing an application, as suggested by Kaliyaperumal (See abstract and summary).
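Claim 5's limitation, frame thumbnails fetched from Figma's REST API and held only until the user finishes a frame selection, can be pictured with a minimal Python sketch. The `ThumbnailCache` class and its method names below are hypothetical illustrations, not code from the application or the cited references; Figma's public REST API does expose an images endpoint that returns thumbnail URLs keyed by node id, but no network call is made here.

```python
class ThumbnailCache:
    """Hypothetical temporary store for frame thumbnails.

    Thumbnail URLs (e.g., as returned by Figma's GET /v1/images
    endpoint) are held only while the user is choosing frames,
    then discarded once the selection is confirmed.
    """

    def __init__(self):
        self._thumbnails = {}      # frame/node id -> thumbnail URL
        self.selection_done = False

    def store(self, frame_id, url):
        if self.selection_done:
            raise RuntimeError("selection already completed")
        self._thumbnails[frame_id] = url

    def get(self, frame_id):
        return self._thumbnails.get(frame_id)

    def complete_selection(self, chosen_ids):
        """Return the chosen frame ids and drop all cached thumbnails."""
        chosen = [fid for fid in chosen_ids if fid in self._thumbnails]
        self._thumbnails.clear()   # the "saved temporarily" window ends here
        self.selection_done = True
        return chosen


cache = ThumbnailCache()
cache.store("1:2", "https://example.com/thumb-a.png")
cache.store("1:3", "https://example.com/thumb-b.png")
chosen = cache.complete_selection(["1:3"])
```

The point of the sketch is the lifecycle: thumbnails exist only between `store` and `complete_selection`, matching the "saved temporarily until the user completes a Frame selection process" wording.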
Claim 7 is rejected for the reasons set forth hereinabove for claim 1; Kumar, Anderson, Parsolano and Kaliyaperumal teach the method of claim 1, comprising generating GET requests to interact with a representational state transfer architectural style application programming interface (REST API) and retrieve design elements (Kaliyaperumal, Some extractors 118 are configured to use application program interfaces (APIs) associated with the content type to extract the content entities 122 from associated input content 106 (e.g., an extractor 118 associated with FIGMA interface design content uses FIGMA APIs to extract text, controls, objects, and other metadata or content entities from an input FIGMA file). Other types of extractors 118 are configured to use a combination of type-specific APIs, trained analysis models, and/or other methods to extract the content entities 122 from input content 106. For instance, an extractor 118 of a content type 108 that has text extraction APIs but no APIs for extracting objects or shapes is configured to use an object detection model (e.g., like object detection model 126) to extract objects, shapes, and/or structures in association with text that is extracted using the text extraction APIs.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. Thus, one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate Kaliyaperumal into Kumar, Anderson and Parsolano to enable a wide variety of different types of input to be converted into application templates using type-specific content data extractors in an unconventional manner.
The method allows developers to quickly obtain an application that has both the components they need and the designs they want, and helps developers that are not familiar with code avoid feeling overwhelmed and unsure about how to get started with developing an application, as suggested by Kaliyaperumal (See abstract and summary).
Claim 16 is rejected for the reasons set forth hereinabove for claim 15; Kumar, Anderson, Parsolano and Kaliyaperumal teach the system of claim 15, wherein the predetermined language comprises Figma (Kaliyaperumal, fig. 5, para [0073-0080], FIG. 5 is a flowchart illustrating a computerized method 500 for generating an application template (e.g., application template 139) from input content (e.g., input content 106) of a content type (e.g., content type 108). In some examples, the method 500 is executed or otherwise performed in a system such as system 100 of FIG. 1. At 502, the input content of the content type is obtained. For instance, in an example, a user of a system uploads the input content to the system via a content upload interface (e.g., content upload interface 110). In some examples, the input content is of at least one of the following content types: an image type (e.g., a hand-drawn image or a screen shot), a digital document type (e.g., a PDF document), an interface design type (e.g., a FIGMA file), and a presentation design type (e.g., a POWERPOINT presentation file).). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. Thus, one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate Kaliyaperumal into Kumar, Anderson and Parsolano to enable a wide variety of different types of input to be converted into application templates using type-specific content data extractors in an unconventional manner.
The method allows developers to quickly obtain an application that has both the components they need and the designs they want, and helps developers that are not familiar with code avoid feeling overwhelmed and unsure about how to get started with developing an application, as suggested by Kaliyaperumal (See abstract and summary).
Claim 20 is rejected for the reasons set forth hereinabove for claim 17; Kumar, Anderson, Parsolano and Kaliyaperumal teach the system of claim 17, wherein one or more frame thumbnails obtained from Figma's representational state transfer architectural style application programming interface (REST API) are saved temporarily until the user completes a Frame selection process (Kaliyaperumal, US 20230095089, Some extractors 118 are configured to use application program interfaces (APIs) associated with the content type to extract the content entities 122 from associated input content 106 (e.g., an extractor 118 associated with FIGMA interface design content uses FIGMA APIs to extract text, controls, objects, and other metadata or content entities from an input FIGMA file). Other types of extractors 118 are configured to use a combination of type-specific APIs, trained analysis models, and/or other methods to extract the content entities 122 from input content 106. For instance, an extractor 118 of a content type 108 that has text extraction APIs but no APIs for extracting objects or shapes is configured to use an object detection model (e.g., like object detection model 126) to extract objects, shapes, and/or structures in association with text that is extracted using the text extraction APIs.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references.
Thus, one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate Kaliyaperumal into Kumar, Anderson and Parsolano to enable a wide variety of different types of input to be converted into application templates using type-specific content data extractors in an unconventional manner. The method allows developers to quickly obtain an application that has both the components they need and the designs they want, and helps developers that are not familiar with code avoid feeling overwhelmed and unsure about how to get started with developing an application, as suggested by Kaliyaperumal (See abstract and summary).
Inquiry
12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUY KHUONG THANH NGUYEN whose telephone number is (571)270-7139. The examiner can normally be reached Monday - Friday 0800-1630. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock, can be reached at 571-272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DUY KHUONG T NGUYEN/ Primary Examiner, Art Unit 2199
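The claim 17 and 18 limitations mapped above (a design file key obtained from the file's URL, an access token used to retrieve frames, design elements mapped to code templates, and front-end code generated from them) trace a pipeline that can be sketched roughly in Python. This is an illustrative reconstruction, not code from the application or the cited references: the URL regex, the `render_element` helper, and the template strings are all hypothetical; only the `X-Figma-Token` header and the `https://api.figma.com/v1/files/{key}` endpoint reflect Figma's public REST API, and the request here is merely described, never sent.

```python
import re

def file_key_from_url(url: str) -> str:
    """Pull the design file key out of a Figma file URL (claim 18).
    Hypothetical pattern; Figma URLs commonly look like
    https://www.figma.com/file/<key>/<name> or /design/<key>/<name>."""
    m = re.search(r"figma\.com/(?:file|design)/([A-Za-z0-9]+)", url)
    if not m:
        raise ValueError("no file key found in URL")
    return m.group(1)

def file_request(key: str, token: str) -> dict:
    """Describe the authorized GET request for the file's frame tree
    (claim 17); Figma's REST API authenticates via an X-Figma-Token header."""
    return {
        "method": "GET",
        "url": f"https://api.figma.com/v1/files/{key}",
        "headers": {"X-Figma-Token": token},
    }

# Hypothetical element-type -> front-end template mapping, standing in for
# claim 17's "mapping the design elements to code templates".
TEMPLATES = {
    "TEXT":  "<span>{chars}</span>",
    "FRAME": '<div class="{name}">{children}</div>',
}

def render_element(node: dict) -> str:
    """Recursively emit front-end markup for one design element."""
    if node["type"] == "TEXT":
        return TEMPLATES["TEXT"].format(chars=node["characters"])
    children = "".join(render_element(c) for c in node.get("children", []))
    return TEMPLATES["FRAME"].format(name=node["name"], children=children)

key = file_key_from_url("https://www.figma.com/file/AbC123xyz/login-screen")
req = file_request(key, token="figd_example_token")
frame = {"type": "FRAME", "name": "login", "children": [
    {"type": "TEXT", "characters": "Sign in"}]}
html = render_element(frame)
```

A real generator would target a framework (React, SwiftUI, etc.) per Kumar's per-platform code generation templates; string templates keyed by element type are the simplest stand-in for that idea.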

Prosecution Timeline

Jul 04, 2023
Application Filed
Oct 30, 2025
Non-Final Rejection — §103, §112, §DP
Nov 03, 2025
Examiner Interview Summary
Nov 03, 2025
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596634
TESTING A MACHINE LEARNING MODEL
2y 5m to grant Granted Apr 07, 2026
Patent 12596534
Spreadsheet-Based Software Application Development
2y 5m to grant Granted Apr 07, 2026
Patent 12578935
COMPOSITION OF PATTERN-DRIVEN REACTIONS IN REAL-TIME DATAFLOW PROGRAMMING
2y 5m to grant Granted Mar 17, 2026
Patent 12578960
DISTINGUISHING PATTERN DIFFERENCES FROM NON-PATTERN DIFFERENCES
2y 5m to grant Granted Mar 17, 2026
Patent 12572333
Vehicle Electronic Control Device and Program Rewriting Method
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+35.2%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 539 resolved cases by this examiner. Grant probability derived from career allow rate.
