DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is responsive to the application filed 1/29/2024.
Claims 1-20 are pending with claims 1, 13, and 20 as independent claims.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 3/6/2025, 4/1/2025, and 4/15/2025 were filed after the filing date of the application on 1/29/2024. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 13-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kumar et al. (US 2019/0250891, published 8/15/2019, hereinafter as Kumar).
Claim 13. A method comprising:
obtaining a deployable software implementation of the GUI, wherein the deployable software implementation is compatible with a GUI framework and includes a layout of GUI components; Kumar discloses in [0066-0068] “model generation system 102 may be configured to take GUI screen images 104 as input and automatically generate GUI model 124 using, for example, a model generator 120, a UI component classifier 136, and/or reference information 121 stored in a memory 122.” And in [0075] “in some implementations, GUI model 124 may be described in a data-interchange format that is language independent, such as JavaScript Object Notation (JSON) format… the model information may be encoded in a markup language such as Extensible Markup Language (XML) or jQuery… model generation system 102 may generate one or more XML files that together represent GUI model 124… GUI model 124 may be passed to one or more downstream consumers, for example, code generators 126, 128, and 130, by model generation system 102 without first being persisted to a file.” (emphases added) examiner note: the GUI model may be a deployable software implementation generated by the model generation system 102, wherein the GUI model, as an XML file representing the GUI framework and layout of the GUI screen images 104, may be input to downstream consumers 103. Thus, the system takes a GUI image and generates executable code (an XML file) that is input to downstream consumers 103, as shown in fig. 1,
generating, through use of a reverse component mapper, respectively corresponding design artifacts for the GUI components in accordance with the layout; Kumar discloses in [0075-0076] “model generation system 102 may generate one or more XML files that together represent GUI model 124. The generated file(s) may be stored in memory 122 or in some other memory locations accessible to model generation system 102. In certain embodiments, GUI model 124 may be passed to one or more downstream consumers, for example, code generators 126, 128, and 130, by model generation system 102 without first being persisted to a file… GUI model 124 may then be used by one or more downstream model consumers 103. For example, model consumers 103 may be configured to generate one or more GUI implementations 110, 112, and 114 based upon GUI model 124. GUI implementations 110, 112, and 114 may each be based on information specified in GUI model 124.” (emphases added) examiner note: the downstream consumers 103 take in GUI model 124 (an XML file) as input and generate GUI implementation 110 as an output, as shown in fig. 1,
populating, based on properties of the GUI components, respectively corresponding properties of the design artifacts; Kumar discloses in [0076-0079] “Since GUI model 124 is generated based upon designed GUI screen images 104, a GUI implementation generated based upon GUI model 124 may have the look and feel and the functionality as described in GUI screen images 104. For example, GUI model 124 may include information specifying a particular GUI window or screen comprising a particular set of UI components and mapped to a particular set of functions or actions. A GUI implementation (e.g., the code or instructions implementing the GUI) generated based upon GUI model 124 may include code and logic for instantiating the particular GUI screen with the particular set of UI components and mapped to the particular set of functions or actions… the GUI implementations may implement GUI screens and associated actions or functions as described by GUI model 124, which in turn is generated based upon GUI screen images 104. For example, if GUI model 124 specifies a particular screen including a set of user interface components arranged in a particular physical layout, then that screen and the particular physical layout may be implemented by the GUI implementation. If the GUI model 124 specifies a particular function for a particular user interface component, then a GUI implementation generated based upon the model may include logic for implementing that particular function and associating the function with the particular user interface component.” (emphases added) examiner note: populating the properties of the artifacts may indicate the arrangement of the UI components on the GUI implementation 110, for example, in a similar fashion to the UI components on the GUI screen images 104. In other words, the layout of the GUI components generated based on executing GUI model 124 may be similar to the layout of the GUI components on the GUI screen images 104, and
generating, based on the layout, the design artifacts, and their corresponding properties, a design specification of the GUI that is compatible with a GUI design tool. Kumar discloses in [0078-0080] “downstream model consumers 103 may include one or more code generators 126, 128, and 130 that are configured to take GUI model 124 as input and generate code implementations of the GUI, possibly in different programming languages and/or for different platforms, based on, for example, code generation templates 140 for different programming languages and/or for different platforms. A code generator may take GUI model 124 as input and generate code implementing the GUI in a language specific to that code generator. The implementation may be an executable implementation of the GUI executable by one or more processors. For instance, code generator 126 may take model 124 as input and generate a GUI implementation 110 in a first language for a first platform (e.g., for iOS® platform). Code generator 128 may generate GUI implementation 112 in a second language using GUI model 124 for the first platform. Code generator 130 may generate GUI implementation 114 using GUI model 124 for an Android® platform. A GUI implementation may be compiled (or interpreted, or some other processing performed on it) to generate an executable version of the GUI… code generator 126 may be configured to receive one or more files comprising markup code corresponding to GUI model 124 and output a GUI implementation 110 comprising one or more source code files by translating the markup code (e.g., XML) into (high-level) source code (e.g., Java, C++, or other programming language).” And in [0106] “the code may be generated for Angular JS or Bootstrap. The GUI implementation may be an executable implementation of the GUI executable by one or more processors. 
In some embodiments, a GUI implementation may be compiled (or interpreted, or some other processing performed on it) to generate an executable version of the GUI. Page artifact 312 generated for the GUI may then be made available to end-users.” (emphases added) examiner note: page artifact 312 may be the design specification of the GUI components, which is a version of the GUI screen images 104.
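For illustration only (this sketch is not part of the cited disclosures; the model format, element names, and generator function are hypothetical), the model-to-code pipeline described above — a language-independent GUI model consumed by a code generator that emits a markup implementation — could be sketched as:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical JSON GUI model of the general kind Kumar describes
# in [0075]: a layout of typed UI components with basic properties.
gui_model = json.loads("""
{
  "screen": "login",
  "components": [
    {"type": "text_field", "id": "username", "label": "User Name"},
    {"type": "button", "id": "submit", "label": "Sign In"}
  ]
}
""")

def generate_markup(model):
    """Translate the language-independent model into a markup
    implementation, analogous in role to code generators 126/128/130."""
    screen = ET.Element("screen", name=model["screen"])
    for comp in model["components"]:
        # each model entry becomes one markup element with its properties
        ET.SubElement(screen, comp["type"], id=comp["id"], label=comp["label"])
    return ET.tostring(screen, encoding="unicode")

markup = generate_markup(gui_model)
print(markup)
```

The same model could be fed to a different generator targeting another language or platform, which is the reuse property the quoted passages emphasize.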
Claim 14. The method of claim 13, wherein the layout of GUI components is arranged in a tree-like fashion, and wherein generating the respectively corresponding design artifacts for the GUI components involves a depth-first or breadth-first traversal of the layout of GUI components. Kumar discloses in [0106] “the code may be generated for Angular JS or Bootstrap. The GUI implementation may be an executable implementation of the GUI executable by one or more processors. In some embodiments, a GUI implementation may be compiled (or interpreted, or some other processing performed on it) to generate an executable version of the GUI. Page artifact 312 generated for the GUI may then be made available to end-users.” And in [0115-0116 and 0148] “a model generation system may generate a GUI model for a GUI using the machine learning-based model. For example, in some implementations, each of the input GUI screen images may be processed to identify and extract GUI components including text content items and individual UI components, determine parameters of the text content items and the UI components (e.g., sizes, locations, colors, and the like), and classify the UI components to determine the types of the UI components, using the machine learning-based model(s). The classified UI components and the text content items may then be grouped to form a hierarchy of GUI components. A layout (or cluster map) of a GUI screen may be determined based on the hierarchy… based upon the platform that the GUI may be used on and the target language for the implementation, a code generation template may be selected from available code generation templates (e.g., code generation templates 140 described with respect to FIG. 1)… the GUI model (e.g., in JSON format) can be used to generate code in different programming languages, such as markup languages (e.g., HTML or XML) or stylesheet languages (e.g., cascading style sheet (CSS)).” (emphases added) examiner note: the GUI implementation 110 may represent user interface components specified in an XML document (tree-like fashion), which inherently comprises different levels of depth, like parent and child levels.
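For illustration only (the XML layout and function below are invented, not drawn from the record), a depth-first traversal of an XML-encoded, tree-like GUI layout of the sort the note above refers to could be sketched as:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML layout with parent/child nesting: containers
# (panels) hold child UI components at greater depth.
layout = ET.fromstring("""
<window>
  <panel id="header"><label id="title"/></panel>
  <panel id="body">
    <text_field id="session_name"/>
    <button id="ok"/>
  </panel>
</window>
""")

def depth_first(node, depth=0, visited=None):
    """Visit each component, descending into a subtree before
    moving on to the next sibling (depth-first order)."""
    if visited is None:
        visited = []
    visited.append((depth, node.tag))
    for child in node:  # children in document order
        depth_first(child, depth + 1, visited)
    return visited

order = depth_first(layout)
print(order)
```

A breadth-first variant would instead visit all components at one depth before descending, but either traversal covers every node of the hierarchy.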
Claim 15. The rejection of the method of claim 13 is incorporated, wherein the reverse component mapper associates indications of the GUI components to predefined specifications of the corresponding design artifacts. Kumar discloses in [0063] “UI components 134 may be arranged on a GUI screen image 104 according to a layout or a hierarchical structure, such as a table, a list, a tree structure, a flow chart, an organization chart, and the like. Some UI components 134 may be clickable, selectable, or may otherwise take user input (e.g., user entry), while some other UI components may be static or may not take any user input.” And in [0079, 0145-0148] “GUI implementations 110, 112, and 114 may each correspond to a code generation template that can be used to implement the GUI. A code generation template may include one or more source code files containing high-level code (which may include methods, functions, classes, event handlers, and the like) that can be compiled or interpreted to generate a GUI executable for executing by one or more processors of a computer system. In this manner, a executable implementation of the GUI can be automatically generated based upon GUI model 124, where the executable implementation encapsulates the look and feel of the GUI and the functionalities of the GUI and UI components as described in the GUI design information. For example, code generator 126 may be configured to receive one or more files comprising markup code corresponding to GUI model 124 and output a GUI implementation 110 comprising one or more source code files by translating the markup code (e.g., XML) into (high-level) source code (e.g., Java, C++, or other programming language)… developers may further augment the code template implementation with additional code to complete or enhance (e.g., add additional functionality to) the code base. 
For example, a code generator may be configured to receive one or more files comprising markup code (e.g., XML) corresponding to the GUI model and output a GUI implementation comprising one or more source code files by translating the markup code into (high-level) source code (e.g., in Java, C++, or other languages). A code implementation may then be compiled (or interpreted, or some other processing performed on it) to generate an executable version of the GUI. In some embodiments, the GUI model (e.g., in JSON format) can be used to generate code in different programming languages, such as markup languages (e.g., HTML or XML) or stylesheet languages (e.g., cascading style sheet (CSS)).” (emphases added) examiner note: arranging GUI components according to a layout or hierarchical structure may be a predefined specification, such that the model generation system 102 may generate GUI model 124 as executable code to be input to the downstream consumers 103.
Claim 16. The rejection of the method of claim 15 is incorporated, wherein the reverse component mapper also associates property rules to the indications of the GUI components, wherein the property rules define how properties of the corresponding design artifacts are populated. Kumar discloses in [0071-0072 and 0101] “reference information 121 may include various rules that guide the processing performed by model generation system 102. In certain embodiments, reference information 121 may include rules that model generation system 102 may use to determine one or more GUI screens specified for the GUI, and/or for each GUI screen, the set of user interface components included on that screen, and the physical layout of the GUI screen (e.g., rules for UI component and text content item clustering). In the embodiment depicted in FIG. 1, reference information 121 may be stored in memory 122… the grouping may be based on distance and/or similarity between the components. In some embodiments, clustering module 356 may perform the grouping using a set of rules.” (emphases added).
Claim 17. The rejection of the method of claim 13 is incorporated, wherein the design artifacts or the GUI components are specified in JavaScript Object Notation (JSON) metadata or eXtensible Markup Language (XML) metadata as a hierarchy of elements that define the design artifacts or the GUI components and their respective relationships. Kumar discloses in [0055] “a first consumer may use the GUI model to automatically generate an executable for a first platform (e.g., iOS®) and a second consumer may use the same GUI model to automatically generate a second executable for a different platform (e.g., Android®). The GUI model (e.g., in JSON format) can also be used to generate code in different programming languages, such as markup languages (e.g., HTML or XML) or stylesheet languages (e.g., cascading style sheet (CSS)).” (emphases added) examiner note: the output of the downstream consumers 103 may be a document represented by XML code that, when executed by a platform browser, displays GUI components similar to the input GUI screen images 104.
Claim 18. The rejection of the method of claim 13 is incorporated, wherein generating the respectively corresponding design artifacts for the GUI components comprises applying a one-to-one mapping between the design artifacts and GUI components supported by the GUI design tool. Kumar discloses in [0055] “a first consumer may use the GUI model to automatically generate an executable for a first platform (e.g., iOS®) and a second consumer may use the same GUI model to automatically generate a second executable for a different platform (e.g., Android®). The GUI model (e.g., in JSON format) can also be used to generate code in different programming languages, such as markup languages (e.g., HTML or XML) or stylesheet languages (e.g., cascading style sheet (CSS)).” And in [0075 and 0148] “the model information may be encoded in a markup language such as Extensible Markup Language (XML) or jQuery. For example, model generation system 102 may generate one or more XML files that together represent GUI model 124.” (emphases added) examiner note: the output of the model generation system 102 may be GUI model 124, represented by an XML file, which may be an input to the downstream consumers 103, which output code in an XML format. Thus, the output of the model generation system 102 and the output of the downstream consumers 103 may be one-to-one, or they may have similar layouts.
Claim 19. The rejection of the method of claim 13 is incorporated, wherein generating the design specification of the GUI comprises generating, for each of the design artifacts and their respectively corresponding properties, metadata that, when interpreted, cause the GUI design tool to display a representation of the GUI. Kumar discloses in [0105-0106 and 0145] “After the text analysis to identify and change the type associated with clickable text content items, the cluster map may be updated and fed to a metadata generator 362. Metadata generator 362 may generate a GUI model for the GUI that may include one or more GUI screens. The GUI model may be an optimum representation of the GUI screen images that are submitted to server subsystem 320. In some embodiments, metadata generator 362 may generate the GUI model in a data-interchange format that is language independent, such as JavaScript Object Notation (JSON) format. The GUI model (e.g., described in JSON metadata) may then be sent to client subsystem 310 through REST service 340 as the response to the request from client subsystem 310… After receiving a GUI model 306, client subsystem 310 may sent the GUI model (e.g., in JSON metadata) to a page generator 308. Page generator 308 may include a code generator (e.g., code generator 126, 128, or 130) as described above with respect to FIG. 1. Page generator 308 may take GUI model 306 as input and generate code implementing the GUI in a target language for a target platform, such as a mobile device that is operated using iOS® or Android® or a system with a wide-screen that is operated using iOS®, Windows®, or Linux. For example, the code may be generated for Angular JS or Bootstrap… a GUI implementation may be compiled (or interpreted, or some other processing performed on it) to generate an executable version of the GUI. 
Page artifact 312 generated for the GUI may then be made available to end-users… the source code for implementing the GUI may be generated based on certain code generation templates. For example, various code generator applications (e.g., code generators 126, 128, and 130) may take the GUI model as input and generate code for implementing the GUI, possibly in different programming languages and/or for different platforms, based on, for example, code generation templates 140 for different programming languages and/or for different platforms.” (emphases added) examiner note: the output of the downstream consumers 103 may be a document represented by XML code that, when executed by a platform browser, displays GUI components similar to the input GUI screen images 104 for the corresponding platform, such that each platform has its own properties represented by the code generation template associated with that platform.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shi et al. (US 2023/0376284, published 11/23/2023, hereinafter as Shi) in view of Lu et al. (US 2011/0131491, published 6/2/2011, hereinafter as Lu), and further in view of Kumar et al. (US 2019/0250891, hereinafter as Kumar).
Claim 1. A method comprising:
obtaining [a design specification] of a graphical user interface (GUI), wherein the design specification is compatible with a GUI design tool and includes a layout of design artifacts; Shi discloses in [0068-0072] “the entry as shown in the dashed box of FIG. 6 may correspond to the UI element “Session Name” of the text field type as shown in FIG. 5 and describes a group of attributes that describe the key information of that UI element from the perspective of help content… descriptor file generator 401 is configured to generate the descriptor file based on image recognition on the design prototype(s), for example, by utilizing an image analyzer that identifies key information related to help content from design prototype(s) based on image understanding, which is useful especially in case that the front-end code is not available for confidentiality and other reasons. For example, descriptor file generator 401 is configured to perform image recognition on the design prototype(s) to recognize at least one UI element and identify the group of attributes and corresponding values for the at least one UI element.” (emphases added) examiner note: the design prototype may be a design specification of a GUI, such as the layout of UI element “Session Name” shown in fig. 5. The descriptor file generator 401 obtains (as an input) the design specification,
generating, through use of a component mapper, respectively corresponding GUI components for the design artifacts in accordance with the layout; Shi discloses in [0068-0072] “descriptor file generator 401 is configured to generate the descriptor file based on image recognition on the design prototype(s), for example, by utilizing an image analyzer that identifies key information related to help content from design prototype(s) based on image understanding, which is useful especially in case that the front-end code is not available for confidentiality and other reasons. For example, descriptor file generator 401 is configured to perform image recognition on the design prototype(s) to recognize at least one UI element and identify the group of attributes and corresponding values for the at least one UI element.” (emphases added) examiner note: the descriptor file generator 401 generates a descriptor file as a corresponding GUI component, as shown in fig. 6,
populating, based on properties of the design artifacts, respectively corresponding properties of the GUI components; Shi discloses in [0078, 0093, and 0097] “some attributes can be filled with values that are identifiable or recognizable from the prototype(s). As for attribute values that cannot be identified from the prototype(s), they can be left as empty. Accordingly, a descriptor file including attribute-value pairs for at least one UI element of the UI can be generated based on image recognition on the design prototype(s)… a text field is usually described based on attributes such as: i) “definition” attribute, which is associated with a value indicating what to specify in the field (or what the field is used for), ii) “inputType” attribute, which is associated with a value indicating the supported characters, such as alphanumeric; iii) “minlength” or “maxlength” attribute which is associated with a value indicating the minimum or maximum number of characters that can be filled in, and the like… a UI element may present different content to different user roles, and in such case, a content provider can add an attribute “role” to the UI element, and descriptor file updater 404 may update raw descriptor file generated by descriptor file generator 401 by adding the attribute “role” to the UI elements and the value thereof” (emphases added) examiner note: the attributes (roles) may be properties of GUI components, and
Shi does not explicitly disclose
a design specification of a graphical user interface (GUI). However, Kumar, in an analogous art, discloses in [0007-0011] “an image of a GUI screen (also referred to as GUI screen image) designed by, for example, a GUI designer, may be analyzed to extract text information from the GUI screen image and to identify the UI components included in the GUI screen… A GUI model may then be generated for the GUI for the application based upon, for example, the detected UI components, the types of the UI components, the locations of the UI components, the associated text information for the UI components, and additional text information that may not be associated with any UI component. The GUI model may be language-independent and platform-independent… the GUI model may be described in a data-interchange format that is language-independent, such as the JavaScript Object Notation (JSON) format. In some implementations, the GUI model may be generated as metadata that can be associated with an application.” (emphases added) examiner note: the design specification of the GUI may be specified in a specification document, such as one in the JavaScript Object Notation (JSON) format,
generating, based on the layout, the GUI components, and their corresponding properties, a deployable software implementation of the GUI. Further, Kumar discloses in [0016] “A GUI model generated for a GUI based upon the GUI design information can be used by various downstream consumers. For example, a downstream consumer may use the model to, automatically and substantially free of any manual coding, generate code for implementing the GUI. The code may be an executable program executable by one or more processors or an interpretable program that can be interpreted by, for example, a web browser, to display a GUI having a look-and-feel and/or functionality that is substantially similar to the desired look-and-feel and/or functionality depicted in the set of images that were used to generate the GUI model. The same GUI model can be used by different consumers. For example, a first consumer may use the GUI model for automatically generating an executable for a first platform (e.g., iOS®) and a second consumer may use the same GUI model to automatically generate a second executable for a different platform (e.g., Android®). The GUI model (e.g., in JSON format) can also be used to generate code in different programming languages, such as markup languages (e.g., HTML or XML) or stylesheet languages (e.g., cascading style sheet (CSS)).” (emphases added) examiner note: the generated model may be used as deployable code to be executed by a consumer web browser.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Shi with the teaching of Kumar for providing “techniques for automating the development of a graphic user interface (GUI) for an application from design documents, such as one or more images or sketches for one or more GUI screens of the application” to save time and money because “quite often, developers with specific expertise are needed for the GUI development.” Kumar [0002-0005].
Claim 2. The rejection of the method of claim 1 is incorporated. Shi does not explicitly disclose wherein the layout of the design artifacts is arranged in a tree-like fashion, and wherein generating the respectively corresponding GUI components for the design artifacts involves a depth-first or breadth-first traversal of the layout of the design artifacts. However, Kumar discloses in [0012 and 0016] “the model may also include information about the structure of the GUI screen, such as information identifying a hierarchical organization of the user interface components and text content items on the GUI screen. For example, in some embodiments, UI components may be grouped based on, for example, the types and locations of the UI components, to form subgroups of UI components (e.g., a table or a list). The subgroups may be further clustered to determine a higher level layout of the GUI screen… The same GUI model can be used by different consumers. For example, a first consumer may use the GUI model for automatically generating an executable for a first platform (e.g., iOS®) and a second consumer may use the same GUI model to automatically generate a second executable for a different platform (e.g., Android®). The GUI model (e.g., in JSON format) can also be used to generate code in different programming languages, such as markup languages (e.g., HTML or XML) or stylesheet languages (e.g., cascading style sheet (CSS)).” (emphases added) examiner note: the user interface component may be specified in an XML document (tree-like fashion), which inherently comprises different levels of depth, like parent and child levels.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Shi with the teaching of Kumar for providing “techniques for automating the development of a graphic user interface (GUI) for an application from design documents, such as one or more images or sketches for one or more GUI screens of the application” to save time and money because “quite often, developers with specific expertise are needed for the GUI development.” Kumar [0002-0005].
Claim 3. The rejection of the method of claim 1 is incorporated, wherein the component mapper associates indications of the design artifacts with predefined specifications of the corresponding GUI components. Shi discloses in [0117] “a help content training document can be annotated to generate annotations indicating mappings between expressions in help content of each UI element in the help content training document and values in the attribute-value pairs of the UI element in a corresponding training descriptor file.” (emphases added) examiner note: this generates an association between UI elements (GUI components) and values in attribute-value pairs (design artifacts).
Claim 4. The rejection of the method of claim 3 is incorporated, wherein the component mapper also associates property rules to the indications of the design artifacts, wherein the property rules define how properties of the corresponding GUI components are populated. Shi discloses in [0069-0070] “a descriptor file is used to describe the UI from the perspective of help content and includes attribute-value pairs for each UI element of the UI. There can be various ways and formats to construct the descriptor file, for example, the descriptor file can be in any data-exchange format. In one example of the present disclosure, JSON is used as a data-exchange format for constructing the descriptor file, which is illustrated in FIG. 6… As shown in the dashed box of FIG. 6, the group of attributes may serve as the “skeleton” for each UI element of the UI, and each attribute is associated with a corresponding value to form multiple attribute-value pairs for the UI element, for example, the attribute “elementType” is associated with a value of “text field”, and so forth. It should be noted that although FIG. 6 only illustrates one entry of the descriptor file corresponding to one UI element, the descriptor file of the present disclosure may include other entries for other UI elements of the UI (for example, for UI elements of other UI element types) in a similar way.” (emphases added) examiner note: the attributes may be property rules that indicate the attribute “elementType” as a design artifact corresponding to “text field” as a GUI component.
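For illustration only (the attribute names below follow the examples quoted from Shi; the values and the rule table are invented), a descriptor-file entry of attribute-value pairs, and a property rule keyed to the element type, could be sketched as:

```python
import json

# Hypothetical descriptor-file entry for one UI element, mirroring
# the attribute-value pairs Shi describes for a text field.
descriptor_entry = {
    "elementType": "text field",
    "definition": "Specify the name of the session",
    "inputType": "alphanumeric",
    "maxlength": 64,
    "role": "",  # attributes not identifiable from the prototype may be left empty
}

serialized = json.dumps(descriptor_entry, indent=2)
print(serialized)

# A property rule can key off the element type to decide which
# properties of the corresponding GUI component get populated.
rules = {"text field": ["inputType", "maxlength"]}
required = rules[descriptor_entry["elementType"]]
```

Here the `rules` mapping stands in for the predefined property rules recited in the claim; each element type determines which descriptor attributes flow into the GUI component.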
Claim 5. The rejection of the method of claim 1 is incorporated, wherein populating the respectively corresponding properties of the GUI components comprises invoking a natural language processing (NLP) model that infers the corresponding properties. Shi discloses in [0065] “after the descriptor file is generated from at least one of design prototype(s) and front-end code, a trained natural language generation (NLG) model can be used to transform it into a help content document automatically. In such a way, the content providers do not have to draft the help content document manually, which reduces the time and effort spent by the content providers and thus accelerates the UI product to come into the market. In addition, when there is a change to the design prototype or front-end code of the UI, the descriptor file can be updated automatically, which will then be transformed to help content document to reflect the change in an efficient way and allows for easier maintenance of the UI help content document generation.” (emphases added) examiner note: the automatic use of the NLG model may indicate invoking the NLP model to update the descriptor file.
Claim 6. The rejection of the method of claim 5 is incorporated, wherein the NLP model is a transformer-based large language model that was trained on instances of the properties of the GUI components. Shi discloses in [0091] “Help content transformer 402 is configured to transform the descriptor file into a help content document for the UI using a trained natural language generation (NLG) model.” (emphases added).
Claim 7. The rejection of the method of claim 5 is incorporated, wherein invoking the NLP model comprises adding a textual representation of at least some of the properties of the design artifacts to a pre-defined NLP prompt. Shi discloses in [0095-0096] “Descriptor file updater 404 is configured to update the descriptor file generated by descriptor file generator 401 by acquiring attribute(s) or value(s) manually input by a content provider… descriptor file updater 404 may acquire some attribute values related to real-world knowledge or other information that cannot be extracted from the design prototype(s) or the front-end code of the UI, and update the raw descriptor file generated by descriptor file generator 401 based on the attribute values.” (emphases added).
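As a generic sketch only (this code is not from Shi or Kumar; the template text and property names are assumptions), the claim-7 limitation of adding a textual representation of design-artifact properties to a pre-defined NLP prompt could be illustrated as:

```python
import json

# Hypothetical pre-defined prompt template; the wording is an assumption,
# not taken from either reference.
PROMPT_TEMPLATE = "Generate help content for a UI element with these properties:\n"

# Textual representation of some design-artifact properties, appended to
# the pre-defined prompt as recited in claim 7.
properties = {"elementType": "text field", "elementLabel": "User name"}
prompt = PROMPT_TEMPLATE + json.dumps(properties, indent=2)

print("text field" in prompt)  # prints True
```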
Claim 8. The rejection of the method of claim 1 is incorporated, Shi does not explicitly disclose wherein the design artifacts or the GUI components are specified in JavaScript Object Notation (JSON) metadata or eXtensible Markup Language (XML) metadata as a hierarchy of elements that define the design artifacts or the GUI components and their respective relationships. However, Kumar discloses in [0012 and 0016] "the model may also include information about the structure of the GUI screen, such as information identifying a hierarchical organization of the user interface components and text content items on the GUI screen. For example, in some embodiments, UI components may be grouped based on, for example, the types and locations of the UI components, to form subgroups of UI components (e.g., a table or a list). The subgroups may be further clustered to determine a higher level layout of the GUI screen… The same GUI model can be used by different consumers. For example, a first consumer may use the GUI model for automatically generating an executable for a first platform (e.g., iOS®) and a second consumer may use the same GUI model to automatically generate a second executable for a different platform (e.g., Android®). The GUI model (e.g., in JSON format) can also be used to generate code in different programming languages, such as markup languages (e.g., HTML or XML) or stylesheet languages (e.g., cascading style sheet (CSS))." (emphases added) examiner note: the user interface components may be specified in an XML document (in a tree-like fashion), which inherently comprises different levels of depth, such as parent and child levels.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Shi with the teaching of Kumar for providing “techniques for automating the development of a graphic user interface (GUI) for an application from design documents, such as one or more images or sketches for one or more GUI screens of the application” to save time and money because “quite often, developers with specific expertise are needed for the GUI development.” Kumar [0002-0005].
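As a purely illustrative sketch (the element and attribute names below are assumptions, not taken from Kumar), a GUI model expressed as XML of the kind described in Kumar [0012] inherently encodes the parent/child hierarchy noted above:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML GUI model: a screen whose components are grouped into
# a subgroup (a list), mirroring the hierarchical organization of UI
# components described in Kumar [0012].
model_xml = """
<screen name="login">
    <list name="credentials">
        <textField name="username"/>
        <textField name="password"/>
    </list>
    <button name="submit"/>
</screen>
"""

root = ET.fromstring(model_xml)
# The tree structure gives each component a depth: the screen is the
# parent, the list is its child, and the text fields are grandchildren.
text_fields = root.findall("./list/textField")
print(len(text_fields))  # prints 2
```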
Claim 9. The rejection of the method of claim 1 is incorporated, wherein generating the respectively corresponding GUI components for the design artifacts comprises applying a one-to-one mapping between the GUI components and the design artifacts supported by the GUI design tool. Shi discloses in [0117] "a help content training document can be annotated to generate annotations indicating mappings between expressions in help content of each UI element in the help content training document and values in the attribute-value pairs of the UI element in a corresponding training descriptor file." (emphases added) examiner note: the mapping between expressions in the help content document and values in the attribute-value pairs may be a one-to-one mapping.
Claim 10. The rejection of the method of claim 1 is incorporated, wherein generating the deployable software implementation of the GUI comprises generating, for each of the GUI components and their respectively corresponding properties, metadata or software code that, when interpreted or executed, cause a representation of the GUI to be created. Shi discloses in [0117] "a help content training document can be annotated to generate annotations indicating mappings between expressions in help content of each UI element in the help content training document and values in the attribute-value pairs of the UI element in a corresponding training descriptor file." (emphases added) examiner note: generating annotations may indicate generating metadata.
Claim 11. The rejection of the method of claim 1 is incorporated, further comprising:
Shi does not explicitly disclose
transmitting, to a server-based hosting environment, the deployable software implementation of the GUI; However, Kumar discloses in [0012 and 0016] "the model may also include information about the structure of the GUI screen, such as information identifying a hierarchical organization of the user interface components and text content items on the GUI screen. For example, in some embodiments, UI components may be grouped based on, for example, the types and locations of the UI components, to form subgroups of UI components (e.g., a table or a list). The subgroups may be further clustered to determine a higher level layout of the GUI screen… The same GUI model can be used by different consumers. For example, a first consumer may use the GUI model for automatically generating an executable for a first platform (e.g., iOS®) and a second consumer may use the same GUI model to automatically generate a second executable for a different platform (e.g., Android®). The GUI model (e.g., in JSON format) can also be used to generate code in different programming languages, such as markup languages (e.g., HTML or XML) or stylesheet languages (e.g., cascading style sheet (CSS))." (emphases added) examiner note: the model may be deployed as executable code to a consumer web browser, and
arranging the server-based hosting environment so that client devices can remotely interact with the deployable software implementation of the GUI. Further, Kumar discloses in [0106] "The GUI implementation may be an executable implementation of the GUI executable by one or more processors. In some embodiments, a GUI implementation may be compiled (or interpreted, or some other processing performed on it) to generate an executable version of the GUI. Page artifact 312 generated for the GUI may then be made available to end-users." (emphases added) examiner note: the assisting apparatus 100 may be a client device represented by a browser, as shown in fig. 1. The help information providing apparatus 200 may be a remote server-based hosting environment.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Shi with the teaching of Kumar for providing “techniques for automating the development of a graphic user interface (GUI) for an application from design documents, such as one or more images or sketches for one or more GUI screens of the application” to save time and money because “quite often, developers with specific expertise are needed for the GUI development.” Kumar [0002-0005].
Claim 12. The rejection of the method of claim 1 is incorporated, wherein the GUI components and the corresponding properties of the GUI components are represented in an implementation-neutral design format that is different from formats compatible with the GUI design tool and also different from formats supported by the deployable software implementation of the GUI. Shi discloses in [0004-0005] “a descriptor file for a UI can be generated based on at least one of a design prototype and front-end code of the UI. The descriptor file describes a group of attributes related to help content for at least one UI element of the UI and includes attribute-value pairs for the at least one UI element of the UI. Also in this method, the descriptor file can be transformed into a help content document for the UI using a trained natural language generation (NLG) model… the generating the descriptor file for the UI based on the at least one of the design prototype and the front-end code of the UI may include performing image recognition on the design prototype to recognize the at least one UI element and identify the group of attributes and corresponding values for the at least one UI element… the generating the descriptor file for the UI based on the at least one of the design prototype and the front-end code of the UI may include performing code parsing on the front-end code to identify the at least one UI element and the group of attributes and corresponding values for the at least one UI element.” And in [0069] “the descriptor file can be in any data-exchange format. In one example of the present disclosure, JSON is used as a data-exchange format for constructing the descriptor file, which is illustrated in FIG. 6.” And in [0099-0102] “the types of help content documents may include but not limited to: [0100] A tutorial document that provides task-like instructions telling users how to operate with the UI. 
[0101] A reference document lists all the UI elements on the UI and provides description for each element. [0102] Contextual help displayed on the UI to explain particular elements, e.g. hover text, tooltips, callouts, etc." (emphases added) examiner note: the implementation-neutral design format may be the descriptor file in JSON format. The input may be in an image format, as indicated by the performing of image recognition and/or code parsing. The output may be a tutorial document or a reference document and may be in a standard language format such as HTML. Thus, the descriptor file, as an implementation-neutral design, may be in JSON format, which is different from both the input and the output formats.
Claim 20. The claim is directed towards a non-transitory computer-readable medium, having stored thereon program instructions, for implementing the method of claim 1 and, therefore, is rejected similarly to claim 1.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHAMED I NAZAR whose telephone number is (571)270-3174. The examiner can normally be reached 10 am to 7 pm Mon-Fri.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong, can be reached at 571-272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AHAMED I NAZAR/Examiner, Art Unit 2178 2/2/2026
/STEPHEN S HONG/Supervisory Patent Examiner, Art Unit 2178