DETAILED ACTION
This action is in response to the claims filed November 18, 2025. Claims 1, 2, 6-9, 13, and 15 are pending. Claims 1 and 7 are independent claims. Claims 1, 2, 6-9, 13, and 15 have been amended. Claims 3-5, 10-12, 14, and 16-20 have been cancelled.
The claim interpretation under 35 U.S.C. 112(f) is withdrawn in view of Applicant’s amendments to the claims.
The claim rejections under 35 U.S.C. 101 are maintained in view of Applicant’s arguments and amendments to the claims.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 6-9, 13, and 15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1, the limitations “determine whether there is a panel that matches the panel image information in the panel library memory”, “generate a micro-program based on the target panel …”, “generate the target panel based on the panel image information and the preset blank panel”, “…determine panel type information based on the panel image information”, “based on the panel type information, search whether there is a panel that corresponds to the panel type information in a preset panel library memory”, “according to a search result, determine whether there is a panel that matches the panel image information in the preset panel library memory”, “determine whether the panel type information is a text panel type”, “if the panel type information is a text panel type, extract text information from the panel image information and insert the text information into a preset blank text panel to generate the target panel”, and “if the panel type information is not a text panel type, insert the panel image information into a preset blank picture panel to generate the target panel”, as drafted, are functions that, under their broadest reasonable interpretation, recite the abstract idea of mental processes. These limitations encompass a human mind carrying out the functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Thus, these limitations fall under the “Mental Processes” grouping of abstract ideas under Prong 1.
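For illustration only, the recited determine/search/generate flow can be sketched in a few lines of code, underscoring that each step reduces to a simple observation or comparison; all identifiers below are hypothetical and are not drawn from the claims or the prior art:

```python
def generate_target_panel(panel_image_info, panel_library):
    """Illustrative sketch: match panel image information against a panel
    library, falling back to a preset blank panel when no panel matches."""
    panel_type = panel_image_info["type"]           # determine panel type information
    candidates = panel_library.get(panel_type, [])  # search the library by panel type
    for panel in candidates:                        # determine whether a panel matches
        if panel["signature"] == panel_image_info["signature"]:
            return panel                            # acquire the matching panel as the target panel
    # No match found: generate the target panel from a preset blank panel
    if panel_type == "text":
        blank = {"kind": "blank_text_panel", "text": ""}
        blank["text"] = panel_image_info["text"]    # extract and insert text information
    else:
        blank = {"kind": "blank_picture_panel", "image": panel_image_info["image"]}
    return blank
```

The sketch mirrors the claimed fallback: a matching library panel is returned as the target panel; otherwise text or picture content is inserted into a blank panel of the corresponding type.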
Under Prong 2, this judicial exception is not integrated into a practical application. The additional limitations “A computer device, comprising: a processor and a memory storing a computer program that is runnable on the processor, wherein the memory comprises: a panel library memory, the processor comprises a micro-program center processor, and a visualized configuration system processor, wherein”, “through the visualized configuration system”, and “wherein the micro-program center processor is further configured to” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer, and/or mere computer components. See MPEP 2106.05(f). The additional limitation “the panel library memory is configured to store a panel” does nothing more than add the insignificant extra-solution activity of merely storing data in memory to the judicial exception. See MPEP 2106.05(g). The limitations “the micro-program center processor is configured to obtain a page design image comprising panel image information”, “if there is a panel that matches the panel image information in the panel library memory, acquire the panel that matches the panel image information as a target panel”, and “wherein the micro-program center processor is further configured to acquire a preset blank panel from the panel library memory if there is no panel that matches the panel image information in the panel library memory, the preset blank panel comprising at least one of a preset blank text panel and a preset blank picture panel” do nothing more than add the insignificant extra-solution activity of merely gathering and transmitting data to the judicial exception. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claim is therefore directed to the judicial exception.
Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements “A computer device, comprising: a processor and a memory storing a computer program that is runnable on the processor, wherein the memory comprises: a panel library memory, the processor comprises a micro-program center processor, and a visualized configuration system processor, wherein”, “through the visualized configuration system”, and “wherein the micro-program center processor is further configured to” amount to no more than mere instructions and generic computer components to carry out the exception. For the limitation “the panel library memory is configured to store a panel”, the courts have identified storing and retrieving information from memory as well-understood, routine, and conventional activity. See MPEP 2106.05(d). For the limitations “the micro-program center processor is configured to obtain a page design image comprising panel image information”, “if there is a panel that matches the panel image information in the panel library memory, acquire the panel that matches the panel image information as a target panel”, and “wherein the micro-program center processor is further configured to acquire a preset blank panel from the panel library memory if there is no panel that matches the panel image information in the panel library memory, the preset blank panel comprising at least one of a preset blank text panel and a preset blank picture panel”, the courts have identified merely gathering and transmitting data as well-understood, routine, and conventional activity. See MPEP 2106.05(d). Accordingly, the claims are not patent eligible under 35 U.S.C. §101.
Regarding claim 2, the limitation “and perform a configuration operation on the target panel in response to a user operation to generate the micro-program, wherein the configuration operation comprises at least one of display parameter configuration, data bonding, event bonding, and component bonding” is an additional mental step. The limitation “the memory further comprises: a component center configured to store a component” amounts to merely storing data in memory, which does not amount to practical application under Prong 2, nor to significantly more under Step 2B, as discussed above. The limitation “the micro-program center processor is configured to display the target panel through the visualized configuration system processor” amounts to the insignificant post-solution activity of data output, which does not amount to practical application under Prong 2, nor to significantly more under Step 2B. See MPEP 2106.05(g) and 2106.05(d). The additional elements of “computer device” and “processor” merely apply a generic computer/computer components to the judicial exception, which does not amount to practical application under Prong 2, nor to significantly more under Step 2B, as explained above.
Regarding claim 6, the limitation “to package the micro-program to generate a cross-platform application; wherein when the cross-platform application is run in different operating systems, a page comprising the target panel that is suitable for the different operating systems is generated” is an additional mental step. The element “the computer device according to claim 1, the processor further comprises: a product center configured to” merely applies a generic computer component to the judicial exception, which does not amount to practical application under Prong 2, nor to significantly more under Step 2B, as discussed above.
Regarding claim 7, the limitations “determining, …, whether there is a panel that matches the panel image information in a panel library memory”, “generating,…, a micro-program based on the target panel…”, “generating,…, the target panel based on the panel image information and the preset blank panel, wherein the determining,…, whether there is the panel that matches the panel image information in the panel library memory comprises”, “determining,…, panel type information based on the panel image information”, “searching, … based on the panel type information, whether there is a panel that corresponds to the panel type information in a preset panel library memory”, “according to a search result, determining,…, whether there is a panel that matches the panel image information in the preset panel library memory”, and “wherein the generating,…, the target panel based on the panel image information and the preset blank panel comprises: determining,…, whether the panel type information is a text panel type; if the panel type information is a text panel type, extracting,…, text information from the panel image information and inserting the text information into a preset blank text panel to generate the target panel; and if the panel type information is not a text panel type, inserting,…, the panel image information into a preset blank picture panel to generate the target panel”, as drafted, are functions that, under their broadest reasonable interpretation, recite the abstract idea of mental processes. These limitations encompass a human mind carrying out the functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Thus, these limitations fall under the “Mental Processes” grouping of abstract ideas under Prong 1.
Under Prong 2, this judicial exception is not integrated into a practical application. The additional limitations “A computer device, comprising: a processor and a memory storing a computer program that is runnable on the processor, wherein when the processor executes the program, a micro-program generation method is implemented, the method is applied to an application development platform, wherein the method comprises”, “by the processor”, and “…through a visualized configuration system processor” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer, and/or mere computer components. See MPEP 2106.05(f). The limitations “obtaining, by the processor, a page design image comprising panel image information”, “if there is a panel that matches the panel image information in the panel library memory, acquiring, by the processor, the panel that matches the panel image information as a target panel”, and “if there is no panel that matches the panel image information in the panel library memory, acquiring, by the processor, a preset blank panel from the panel library memory, the preset blank panel comprising: at least one of a preset blank text panel and a preset blank picture panel” do nothing more than add the insignificant extra-solution activity of merely gathering and transmitting data to the judicial exception. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claim is therefore directed to the judicial exception.
Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements “A computer device, comprising: a processor and a memory storing a computer program that is runnable on the processor, wherein when the processor executes the program, a micro-program generation method is implemented, the method is applied to an application development platform, wherein the method comprises”, “by the processor”, and “…through a visualized configuration system processor” amount to no more than mere instructions and generic computer components to carry out the exception. For the limitations “obtaining, by the processor, a page design image comprising panel image information”, “if there is a panel that matches the panel image information in the panel library memory, acquiring, by the processor, the panel that matches the panel image information as a target panel”, and “if there is no panel that matches the panel image information in the panel library memory, acquiring, by the processor, a preset blank panel from the panel library memory, the preset blank panel comprising: at least one of a preset blank text panel and a preset blank picture panel”, the courts have identified merely gathering and transmitting data as well-understood, routine, and conventional activity. See MPEP 2106.05(d). Accordingly, the claims are not patent eligible under 35 U.S.C. §101.
Regarding claim 8, the limitation “in response to a user operation, performing a configuration operation on the target panel through the visualized configuration system processor to generate the micro-program, wherein the configuration operation comprises at least one of display parameter configuration, data bonding, event bonding, and component bonding” is an additional mental step. The limitation “displaying… the target panel through the visualized configuration system” amounts to the insignificant post-solution activity of data output, which does not amount to practical application under Prong 2, nor to significantly more under Step 2B. See MPEP 2106.05(g) and 2106.05(d). The limitation “the computer device according to claim 7, wherein the generating, by the processor, the micro-program based on the target panel through the visualized configuration system processor” merely applies a generic computer/computer components to the judicial exception, which does not amount to practical application under Prong 2, nor to significantly more under Step 2B, as explained above.
Regarding claim 9, the limitation “wherein a data format of the micro-program is a JavaScript Object Notation (JSON) data format based on Domain Specific Language (DSL) model” merely further describes the micro-program of the mental step of claim 7. The additional element of “the computer device” merely applies a generic computer/computer components to the judicial exception which does not amount to practical application under Prong 2, nor to significantly more under Step 2B, as explained above.
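For illustration only, a micro-program serialized in a JSON data format based on a DSL model might resemble the following sketch; the field names are hypothetical and are not drawn from the claims or the prior art:

```python
import json

# Hypothetical DSL-model description of a micro-program (illustrative only)
micro_program = {
    "dsl_version": "1.0",
    "panels": [
        {"type": "text", "content": "Welcome", "display": {"font_size": 14}},
        {"type": "picture", "source": "banner.png", "events": {"on_click": "openDetail"}},
    ],
}

serialized = json.dumps(micro_program)  # JSON serialization of the DSL model
restored = json.loads(serialized)       # the structured description round-trips without loss
```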
Regarding claim 13, the limitations “the determining the panel type information based on the panel image information comprises:”, “performing image segmentation on the page design image to obtain the panel image information”, and “performing image recognition on the panel image information to obtain the panel type information” are additional mental steps. The additional elements of “computer device” and “by the processor” merely apply a generic computer/computer components to the judicial exception, which does not amount to practical application under Prong 2, nor to significantly more under Step 2B, as explained above.
Claim 15 does not recite additional mental steps. The limitation “A non-transitory computer-readable storage medium, comprising a stored program, wherein acts of the micro-program generation method according to claim 7 are implemented when the program is run” merely applies a generic computer/computer components to the judicial exception which does not amount to practical application under Prong 2, nor to significantly more under Step 2B, as explained above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 6-8, 13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over US 20180203571 A1 (hereinafter “Dayanandan”) in view of US 20030025732 A1 (hereinafter “Prichard”).
Regarding claim 1, Dayanandan discloses:
A computer device, comprising:
a processor and a memory storing a computer program that is runnable on the processor, wherein the memory comprises (Fig. 24):
a panel library memory, the processor comprises a micro-program center processor, …, wherein the panel library memory is configured to store a panel (Fig. 1, Fig. 2 [the processor comprises a micro-program center]; Paragraph [0087], “In certain embodiments, partition analyzer 205 may also use reference information 121 (e.g., reference information stored in memory 122) to process the partitions, such as information 214 about known GUI components, information 216 about known GUI functions, and the like. For example, known information 214 may include information about different GUI components (i.e., GUI component information) that are commonly found in GUIs. For example, known information 214 may include information identifying various GUI components (e.g., buttons, text boxes, drop-down lists) and their associated characteristics. For example, for each known GUI component, information 214 may include a template GUI component (i.e., an example GUI component) that may be used to perform template matching with detected features of the image. GUI component information 214 may additionally include known phrases and known icons, which may also be used to identify text buttons and image buttons that are commonly found in similar GUIs. In some embodiments, GUI component information 214 may be organized into multiple dictionaries, where each dictionary corresponds to a particular domain (i.e., domain specific dictionaries) [a panel library memory…wherein the panel library memory is configured to store a panel]”);
- the micro-program center processor is configured to obtain a page design image comprising panel image information (Paragraph [0066], “Model generator subsystem 120 is then configured to generate a GUI model 124 that captures the information determined by model generator subsystem 120 from the analysis of image 104. The information determined by model generator subsystem 120 and represented in GUI model 124 may include information related to the look-and-feel of the GUI screen. In certain embodiments, determining the look-and-feel information for a GUI screen may include partitioning the image into one or more partitions, determining a set of GUI components (e.g., buttons, drop down lists, segments, etc.) that are included in each of the partitions and their attributes (e.g., labels, sizes) [the micro-program center processor is configured to obtain a page design image comprising panel image information]”);
- determine whether there is a panel that matches the panel image information in the panel library memory (Paragraph [0152], “In general, a model generator may store templates of GUI components in the memory and match these templates to detected features. The stored templates may account for different styles. For example, a model generator may store a switch button template for matching switch buttons on the Android platform and another switch button template for matching switch buttons on the iOS platform. The stored templates may account for different component states”; Paragraph [0164], “After no more GUI components can be identified from the detected features, the component identifier may designate the remaining features as static text or images [determine whether there is a panel that matches the panel image information in the panel library memory]”);
- if there is a panel that matches the panel image information in the panel library memory, acquire the panel that matches the panel image information as a target panel (Paragraph [0151], “Thus, a component identifier may attempt to identify GUI components with visual indicators before other types of GUI components (e.g., text buttons, image buttons, edit texts). After identifying GUI components with visual indicators, the partition analyzer may identify other types of GUI components”; Paragraph [0152], “Thus, the model generator may store a switch button template for each of the states of the switch button. The component identifier may identify a switch button from a feature based on whether the feature matches the switch button template”; Paragraph [0164], “After no more GUI components can be identified from the detected features, the component identifier may designate the remaining features as static text or images.” [if there is a panel that matches the panel image information in the panel library memory, acquire the panel that matches the panel image information as a target panel]) [Examiner’s remarks: The panel (component) that matches the panel image information from a saved library (templates) is selected as the target panel for later code generation.]; and
- generate a micro-program based on the target panel … (Paragraph [0077], “In certain embodiments, GUI implementations 110, 112, and 114 may each correspond to a code template that implements at least a portion of the application's GUI related functionality. The code template may be made up of one or more source code files containing high-level code (which may comprise methods, functions, classes, event handlers, etc.) that can be compiled or interpreted to generate an executable that can be executed by one or more processors of a computer system”; Paragraph [0078], “GUI implementations 110, 112, and 114 may each be based on information specified in GUI model 124. For example, if the GUI model specifies that a GUI window or screen comprises a particular set of GUI components, the source code that is generated for the GUI window may include code logic for instantiating a GUI screen including each of the GUI components [generate a micro-program based on the target panel …]”);
- wherein the micro-program center processor is further configured to acquire a preset blank panel from the panel library memory if there is no panel that matches the panel image information in the panel library memory, the preset blank panel comprising at least one of a preset blank text panel and a preset blank picture panel (Paragraph [0087], “For example, known information 214 may include information identifying various GUI components (e.g., buttons, text boxes, drop-down lists) and their associated characteristics. For example, for each known GUI component, information 214 may include a template GUI component (i.e., an example GUI component) that may be used to perform template matching with detected features of the image. GUI component information 214 may additionally include known phrases and known icons, which may also be used to identify text buttons and image buttons that are commonly found in similar GUIs. In some embodiments, GUI component information 214 may be organized into multiple dictionaries, where each dictionary corresponds to a particular domain (i.e., domain specific dictionaries)”; Paragraph [0164], “After no more GUI components can be identified from the detected features, the component identifier may designate the remaining features as static text or images [wherein the micro-program center processor is further configured to acquire a preset blank panel from the panel library memory if there is no panel that matches the panel image information in the panel library memory, the preset blank panel comprising at least one of a preset blank text panel and a preset blank picture panel]”) [Examiner’s remarks: The features are matched with GUI components stored in dictionaries, and when the components do not match a stored component, they are assigned either static text or image (preset blank text panel and picture panel).]; and
- generate the target panel based on the panel image information and the preset blank panel (Paragraph [0077], “In certain embodiments, GUI implementations 110, 112, and 114 may each correspond to a code template that implements at least a portion of the application's GUI related functionality. The code template may be made up of one or more source code files containing high-level code (which may comprise methods, functions, classes, event handlers, etc.) that can be compiled or interpreted to generate an executable that can be executed by one or more processors of a computer system”; Paragraph [0078], “GUI implementations 110, 112, and 114 may each be based on information specified in GUI model 124. For example, if the GUI model specifies that a GUI window or screen comprises a particular set of GUI components, the source code that is generated for the GUI window may include code logic for instantiating a GUI screen including each of the GUI components [generate the target panel based on the panel image information and the preset blank panel]”) [Examiner’s remarks: The target panel (GUI) is generated based on the assigned components (blank text or image component).];
- wherein the micro-program center processor is further configured to determine panel type information based on the panel image information (Paragraph [0087], “For example, for each known GUI component, information 214 may include a template GUI component (i.e., an example GUI component) that may be used to perform template matching with detected features of the image. GUI component information 214 may additionally include known phrases and known icons, which may also be used to identify text buttons and image buttons that are commonly found in similar GUIs. In some embodiments, GUI component information 214 may be organized into multiple dictionaries, where each dictionary corresponds to a particular domain (i.e., domain specific dictionaries). For example, the information 214 may comprise a dictionary specific to the database domain (e.g., icons and phrases commonly occurring in database-related GUIs), another dictionary specific to the mobile application domain (e.g., icons and phrases commonly occurring in mobile application GUIs), another dictionary specific to the email domain (e.g., icons and phrases commonly occurring in email-related applications), and so on. The first dictionary may include phrases and icons that are commonly found in database application GUIs while the second dictionary may include phrases and icons that are commonly found in mobile application GUIs [wherein the micro-program center processor is further configured to determine panel type information based on the panel image information]”) [Examiner’s remarks: The panel type information is determined (type of component, text or image component, component specific to a domain) based on the panel image information.];
- based on the panel type information, search whether there is a panel that corresponds to the panel type information in a preset panel library memory (Paragraph [0087], “For example, for each known GUI component, information 214 may include a template GUI component (i.e., an example GUI component) that may be used to perform template matching with detected features of the image. GUI component information 214 may additionally include known phrases and known icons, which may also be used to identify text buttons and image buttons that are commonly found in similar GUIs. In some embodiments, GUI component information 214 may be organized into multiple dictionaries, where each dictionary corresponds to a particular domain (i.e., domain specific dictionaries). For example, the information 214 may comprise a dictionary specific to the database domain (e.g., icons and phrases commonly occurring in database-related GUIs), another dictionary specific to the mobile application domain (e.g., icons and phrases commonly occurring in mobile application GUIs), another dictionary specific to the email domain (e.g., icons and phrases commonly occurring in email-related applications), and so on. The first dictionary may include phrases and icons that are commonly found in database application GUIs while the second dictionary may include phrases and icons that are commonly found in mobile application GUIs [based on the panel type information, search whether there is a panel that corresponds to the panel type information in a preset panel library memory]”); and
- according to a search result, determine whether there is a panel that matches the panel image information in the preset panel library memory (Paragraph [0087], “For example, for each known GUI component, information 214 may include a template GUI component (i.e., an example GUI component) that may be used to perform template matching with detected features of the image. GUI component information 214 may additionally include known phrases and known icons, which may also be used to identify text buttons and image buttons that are commonly found in similar GUIs. In some embodiments, GUI component information 214 may be organized into multiple dictionaries, where each dictionary corresponds to a particular domain (i.e., domain specific dictionaries). For example, the information 214 may comprise a dictionary specific to the database domain (e.g., icons and phrases commonly occurring in database-related GUIs), another dictionary specific to the mobile application domain (e.g., icons and phrases commonly occurring in mobile application GUIs), another dictionary specific to the email domain (e.g., icons and phrases commonly occurring in email-related applications), and so on. The first dictionary may include phrases and icons that are commonly found in database application GUIs while the second dictionary may include phrases and icons that are commonly found in mobile application GUIs [according to a search result, determine whether there is a panel that matches the panel image information in the preset panel library memory]”); and
- wherein the micro-program center processor is further configured to determine whether the panel type information is a text panel type (Paragraph [0096], “For example, a blob detection technique and image recognition may be used to detect icons that may correspond to image-based GUI components (e.g., an image buttons, radio buttons, switch buttons), while a character recognition technique may be used to detect regions of text that may correspond to text-based GUI components (e.g., a text buttons) [wherein the micro-program center processor is further configured to determine whether the panel type information is a text panel type]”);
- if the panel type information is a text panel type, extract text information from the panel image information and insert the text information into a preset blank text panel to generate the target panel (Paragraph [0096], “For example, a blob detection technique and image recognition may be used to detect icons that may correspond to image-based GUI components (e.g., an image buttons, radio buttons, switch buttons), while a character recognition technique may be used to detect regions of text that may correspond to text-based GUI components (e.g., a text buttons). In doing so, feature detector subsystem 310 may consult the set of rules associated with the partition for performing the processing. Upon the completion of the detection process, each of the detected features may be passed to component identifier subsystem 312 for further processing”; Paragraph [0164], “After no more GUI components can be identified from the detected features, the component identifier may designate the remaining features as static text or images”; Paragraph [0007], “For example, the application requirements information for an application may include one or more images (e.g., photos of mock-ups) of GUIs for the application. In certain embodiments, automated techniques are disclosed for generating the GUIs based upon these images [if the panel type information is a text panel type, extract text information from the panel image information and insert the text information into a preset blank text panel to generate the target panel]”) [Examiner’s remarks: When the text panel type is detected, the text is input into the component during the generation of code from the GUI image.]; and
- if the panel type information is not a text panel type, insert the panel image information into a preset blank picture panel to generate the target panel (Paragraph [0096], “For example, a blob detection technique and image recognition may be used to detect icons that may correspond to image-based GUI components (e.g., an image buttons, radio buttons, switch buttons), while a character recognition technique may be used to detect regions of text that may correspond to text-based GUI components (e.g., a text buttons). In doing so, feature detector subsystem 310 may consult the set of rules associated with the partition for performing the processing. Upon the completion of the detection process, each of the detected features may be passed to component identifier subsystem 312 for further processing”; Paragraph [0164], “After no more GUI components can be identified from the detected features, the component identifier may designate the remaining features as static text or images”; Paragraph [0007], “For example, the application requirements information for an application may include one or more images (e.g., photos of mock-ups) of GUIs for the application. In certain embodiments, automated techniques are disclosed for generating the GUIs based upon these images [if the panel type information is not a text panel type, insert the panel image information into a preset blank picture panel to generate the target panel]”) [Examiner’s remarks: If the component is not a text type (an image type), the image is input when the GUI is generated from the design images.].
Dayanandan does not explicitly disclose a visualized configuration system processor. However, Prichard discloses:
- A visualized configuration system processor (Fig. 8, Paragraph [0033], “A screen shot of a representative dialog, having a left pane for navigation and a right pane for editing, is shown in FIG. 8. When the user makes a selection in the left pane of the dialog, DHTML invokes a script which recreates the right pane based on the selection (indicated by the arrow labeled Navigate in FIG. 1). When the user changes a value in a display field in the right pane using DHTML (thereby making it unnecessary to load a new HTML-based display text file in the web browser), the DHTML software for the right pane invokes a script which changes the value stored by the user interface generation software 14. This editing operation is indicated by the arrow labeled Edit in FIG. 1. Later, when the user enters a Save command, the application software queries the user interface generation software 14 and writes the changes back to the application (system) data via the interface 22. This store operation is indicated by the arrow labeled Save in FIG. 1 [a visualized configuration system processor]”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Prichard into the teachings of Dayanandan to include “a visualized configuration system processor”. As stated in Prichard, “Thus there is a need for a technique whereby graphical user interfaces and screen layouts can be easily customized with significant reductions in time, effort and cost” (Paragraph [0006]). Allowing users to easily customize the information input to create a GUI requires less work from experienced developers to develop similar applications from scratch. Allowing customizations through a GUI makes the edits more intuitive for those not well versed in GUI creation. Therefore, it would have been obvious to one of ordinary skill in the art to combine automated GUI generation with customization of the GUI through a user interface.
Regarding claim 2, the rejection of claim 1 is incorporated; and Dayanandan further discloses:
a component center memory configured to store a component (Paragraph [0087], “For example, for each known GUI component, information 214 may include a template GUI component (i.e., an example GUI component) that may be used to perform template matching with detected features of the image. GUI component information 214 may additionally include known phrases and known icons, which may also be used to identify text buttons and image buttons that are commonly found in similar GUIs. In some embodiments, GUI component information 214 may be organized into multiple dictionaries, where each dictionary corresponds to a a particular domain (i.e., domain specific dictionaries) [a component center memory configured to store a component]”);
Dayanandan does not explicitly disclose:
- the micro-program center processor is configured to display the target panel through the visualized configuration system processor; and
- perform a configuration operation on the target panel in response to a user operation to generate the micro-program, wherein the configuration operation comprises at least one of display parameter configuration, data bonding, event bonding, and component bonding.
However, Prichard discloses:
- the micro-program center processor is configured to display the target panel through the visualized configuration system processor (Fig. 8, Paragraph [0033], “A screen shot of a representative dialog, having a left pane for navigation and a right pane for editing, is shown in FIG. 8. When the user makes a selection in the left pane of the dialog, DHTML invokes a script which recreates the right pane based on the selection (indicated by the arrow labeled Navigate in FIG. 1). When the user changes a value in a display field in the right pane using DHTML (thereby making it unnecessary to load a new HTML-based display text file in the web browser), the DHTML software for the right pane invokes a script which changes the value stored by the user interface generation software 14. This editing operation is indicated by the arrow labeled Edit in FIG. 1. Later, when the user enters a Save command, the application software queries the user interface generation software 14 and writes the changes back to the application (system) data via the interface 22. This store operation is indicated by the arrow labeled Save in FIG. 1 [the micro-program center processor is configured to display the target panel through the visualized configuration system processor]”); and
- perform a configuration operation on the target panel in response to a user operation to generate the micro-program, wherein the configuration operation comprises at least one of display parameter configuration, data bonding, event bonding, and component bonding (Fig. 8, Paragraph [0033], “A screen shot of a representative dialog, having a left pane for navigation and a right pane for editing, is shown in FIG. 8. When the user makes a selection in the left pane of the dialog, DHTML invokes a script which recreates the right pane based on the selection (indicated by the arrow labeled Navigate in FIG. 1). When the user changes a value in a display field in the right pane using DHTML (thereby making it unnecessary to load a new HTML-based display text file in the web browser), the DHTML software for the right pane invokes a script which changes the value stored by the user interface generation software 14. This editing operation is indicated by the arrow labeled Edit in FIG. 1. Later, when the user enters a Save command, the application software queries the user interface generation software 14 and writes the changes back to the application (system) data via the interface 22. This store operation is indicated by the arrow labeled Save in FIG. 1 [perform a configuration operation on the target panel in response to a user operation to generate the micro-program, wherein the configuration operation comprises at least one of display parameter configuration, data bonding, event bonding, and component bonding]”) [Examiner’s remarks: In response to the user operation (making a selection, saving), a configuration operation is performed (writing and changing back the application data).].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Prichard into the teachings of Dayanandan to include “the micro-program center processor is configured to display the target panel through the visualized configuration system processor” and “perform a configuration operation on the target panel in response to a user operation to generate the micro-program, wherein the configuration operation comprises at least one of display parameter configuration, data bonding, event bonding, and component bonding”. As stated in Prichard, “Thus there is a need for a technique whereby graphical user interfaces and screen layouts can be easily customized with significant reductions in time, effort and cost” (Paragraph [0006]). Allowing users to easily customize the information input to create a GUI requires less work from experienced developers to develop similar applications from scratch. Allowing customizations through a GUI makes the edits more intuitive for those not well versed in GUI creation. Therefore, it would have been obvious to one of ordinary skill in the art to combine automated GUI generation with customization of the GUI through a user interface.
Regarding claim 6, the rejection of claim 1 is incorporated; and Dayanandan further discloses:
a product center processor configured to package the micro-program to generate a cross-platform application (Paragraph [0071], “In certain embodiment, code generators 126, 128, 130 each be configured to generate code using a specific language and for a particular platform (e.g., Windows, Android, IOS platforms). Accordingly, GUI implementations 110, 112, and 114 generated by the code generators may be in different programming languages and/or for different programming platforms. In this manner, GUI model 124 provides a single common input that can be used to generate different GUI implementations 110, 112, and 114 [a product center processor configured to package the micro-program to generate a cross-platform application]”);
wherein when the cross-platform application is run in different operating systems, a page comprising the target panel that is suitable for the different operating systems is generated (Paragraph [0071], “In certain embodiment, code generators 126, 128, 130 each be configured to generate code using a specific language and for a particular platform (e.g., Windows, Android, IOS platforms). Accordingly, GUI implementations 110, 112, and 114 generated by the code generators may be in different programming languages and/or for different programming platforms. In this manner, GUI model 124 provides a single common input that can be used to generate different GUI implementations 110, 112, and 114 [wherein when the cross-platform application is run in different operating systems, a page comprising the target panel that is suitable for the different operating systems is generated]”).
Regarding claim 7, Dayanandan discloses:
A computer device, comprising: a processor and a memory storing a computer program that is runnable on the processor, wherein when the processor executes the program, a micro-program generation method is implemented, the method is applied to an application development platform, wherein the method comprises (Fig. 24):
obtaining, by the processor, a page design image comprising panel image information (Paragraph [0066], “Model generator subsystem 120 is then configured to generate a GUI model 124 that captures the information determined by model generator subsystem 120 from the analysis of image 104. The information determined by model generator subsystem 120 and represented in GUI model 124 may include information related to the look-and-feel of the GUI screen. In certain embodiments, determining the look-and-feel information for a GUI screen may include partitioning the image into one or more partitions, determining a set of GUI components (e.g., buttons, drop down lists, segments, etc.) that are included in each of the partitions and their attributes (e.g., labels, sizes) [obtaining, by the processor, a page design image comprising panel image information]”);
determining, by the processor, whether there is a panel that matches the panel image information in a panel library memory (Paragraph [0152], “In general, a model generator may store templates of GUI components in the memory and match these templates to detected features. The stored templates may account for different styles. For example, a model generator may store a switch button template for matching switch buttons on the Android platform and another switch button template for matching switch buttons on the iOS platform. The stored templates may account for different component states”; Paragraph [0164], “After no more GUI components can be identified from the detected features, the component identifier may designate the remaining features as static text or images [determining, by the processor, whether there is a panel that matches the panel image information in a panel library memory]”);
if there is a panel that matches the panel image information in the panel library memory, acquiring, by the processor, the panel that matches the panel image information as a target panel (Paragraph [0151], “Thus, a component identifier may attempt to identify GUI components with visual indicators before other types of GUI components (e.g., text buttons, image buttons, edit texts). After identifying GUI components with visual indicators, the partition analyzer may identify other types of GUI components”; Paragraph [0152], “Thus, the model generator may store a switch button template for each of the states of the switch button. The component identifier may identify a switch button from a feature based on whether the feature matches the switch button template”; Paragraph [0164], “After no more GUI components can be identified from the detected features, the component identifier may designate the remaining features as static text or images.” [if there is a panel that matches the panel image information in the panel library memory, acquiring, by the processor, the panel that matches the panel image information as a target panel]) [Examiner’s remarks: The panel (component) that matches the panel image information from a saved library (templates) is selected as the target panel for later code generation.]; and
generating, by the processor, a micro-program based on the target panel … (Paragraph [0077], “In certain embodiments, GUI implementations 110, 112, and 114 may each correspond to a code template that implements at least a portion of the application's GUI related functionality. The code template may be made up of one or more source code files containing high-level code (which may comprise methods, functions, classes, event handlers, etc.) that can be compiled or interpreted to generate an executable that can be executed by one or more processors of a computer system”; Paragraph [0078], “GUI implementations 110, 112, and 114 may each be based on information specified in GUI model 124. For example, if the GUI model specifies that a GUI window or screen comprises a particular set of GUI components, the source code that is generated for the GUI window may include code logic for instantiating a GUI screen including each of the GUI components [generating, by the processor, a micro-program based on the target panel …]”); and
if there is no panel that matches the panel image information in the panel library memory, acquiring, by the processor, a preset blank panel from the panel library memory, the preset blank panel comprising: at least one of a preset blank text panel and a preset blank picture panel (Paragraph [0087], “For example, known information 214 may include information identifying various GUI components (e.g., buttons, text boxes, drop-down lists) and their associated characteristics. For example, for each known GUI component, information 214 may include a template GUI component (i.e., an example GUI component) that may be used to perform template matching with detected features of the image. GUI component information 214 may additionally include known phrases and known icons, which may also be used to identify text buttons and image buttons that are commonly found in similar GUIs. In some embodiments, GUI component information 214 may be organized into multiple dictionaries, where each dictionary corresponds to a a particular domain (i.e., domain specific dictionaries)”; Paragraph [0164], “After no more GUI components can be identified from the detected features, the component identifier may designate the remaining features as static text or images [if there is no panel that matches the panel image information in the panel library memory, acquiring, by the processor, a preset blank panel from the panel library memory, the preset blank panel comprising: at least one of a preset blank text panel and a preset blank picture panel]”) [Examiner’s remarks: The features are matched with GUI components stored in dictionaries, and when the components do not match a stored component, they are assigned either static text or image (preset blank text panel and picture panel).]; and
generating, by the processor, the target panel based on the panel image information and the preset blank panel (Paragraph [0077], “In certain embodiments, GUI implementations 110, 112, and 114 may each correspond to a code template that implements at least a portion of the application's GUI related functionality. The code template may be made up of one or more source code files containing high-level code (which may comprise methods, functions, classes, event handlers, etc.) that can be compiled or interpreted to generate an executable that can be executed by one or more processors of a computer system”; Paragraph [0078], “GUI implementations 110, 112, and 114 may each be based on information specified in GUI model 124. For example, if the GUI model specifies that a GUI window or screen comprises a particular set of GUI components, the source code that is generated for the GUI window may include code logic for instantiating a GUI screen including each of the GUI components [generating, by the processor, the target panel based on the panel image information and the preset blank panel]”) [Examiner’s remarks: The target panel (GUI) is generated based on the assigned components (blank text or image component).],
wherein the determining, by the processor, whether there is the panel that matches the panel image information in the panel library memory comprises (Paragraph [0152], “In general, a model generator may store templates of GUI components in the memory and match these templates to detected features. The stored templates may account for different styles. For example, a model generator may store a switch button template for matching switch buttons on the Android platform and another switch button template for matching switch buttons on the iOS platform. The stored templates may account for different component states”; Paragraph [0164], “After no more GUI components can be identified from the detected features, the component identifier may designate the remaining features as static text or images [determining, by the processor, whether there is a panel that matches the panel image information in a panel library memory]”):
- determining, by the processor, panel type information based on the panel image information (Paragraph [0087], “For example, for each known GUI component, information 214 may include a template GUI component (i.e., an example GUI component) that may be used to perform template matching with detected features of the image. GUI component information 214 may additionally include known phrases and known icons, which may also be used to identify text buttons and image buttons that are commonly found in similar GUIs. In some embodiments, GUI component information 214 may be organized into multiple dictionaries, where each dictionary corresponds to a a particular domain (i.e., domain specific dictionaries). For example, the information 214 may comprise a dictionary specific to the database domain (e.g., icons and phrases commonly occurring in database-related GUIs), another dictionary specific to the mobile application domain (e.g., icons and phrases commonly occurring in mobile application GUIs), another dictionary specific to the email domain (e.g., icons and phrases commonly occurring in email-related applications), and so on. The first dictionary may include phrases and icons that are commonly found in database application GUIs while the second dictionary may include phrases and icons that are commonly found in mobile application GUIs [determining, by the processor, panel type information based on the panel image information]”) [Examiner’s remarks: The panel type information is determined (type of component, text or image component, component specific to a domain) based on the panel image information.];
- searching, by the processor based on the panel type information, whether there is a panel that corresponds to the panel type information in a preset panel library memory (Paragraph [0087], “For example, for each known GUI component, information 214 may include a template GUI component (i.e., an example GUI component) that may be used to perform template matching with detected features of the image. GUI component information 214 may additionally include known phrases and known icons, which may also be used to identify text buttons and image buttons that are commonly found in similar GUIs. In some embodiments, GUI component information 214 may be organized into multiple dictionaries, where each dictionary corresponds to a a particular domain (i.e., domain specific dictionaries). For example, the information 214 may comprise a dictionary specific to the database domain (e.g., icons and phrases commonly occurring in database-related GUIs), another dictionary specific to the mobile application domain (e.g., icons and phrases commonly occurring in mobile application GUIs), another dictionary specific to the email domain (e.g., icons and phrases commonly occurring in email-related applications), and so on. The first dictionary may include phrases and icons that are commonly found in database application GUIs while the second dictionary may include phrases and icons that are commonly found in mobile application GUIs [searching, by the processor based on the panel type information, whether there is a panel that corresponds to the panel type information in a preset panel library memory]”); and
- according to a search result, determining, by the processor, whether there is a panel that matches the panel image information in the preset panel library memory (Paragraph [0087], “For example, for each known GUI component, information 214 may include a template GUI component (i.e., an example GUI component) that may be used to perform template matching with detected features of the image. GUI component information 214 may additionally include known phrases and known icons, which may also be used to identify text buttons and image buttons that are commonly found in similar GUIs. In some embodiments, GUI component information 214 may be organized into multiple dictionaries, where each dictionary corresponds to a a particular domain (i.e., domain specific dictionaries). For example, the information 214 may comprise a dictionary specific to the database domain (e.g., icons and phrases commonly occurring in database-related GUIs), another dictionary specific to the mobile application domain (e.g., icons and phrases commonly occurring in mobile application GUIs), another dictionary specific to the email domain (e.g., icons and phrases commonly occurring in email-related applications), and so on. The first dictionary may include phrases and icons that are commonly found in database application GUIs while the second dictionary may include phrases and icons that are commonly found in mobile application GUIs [according to a search result, determining, by the processor, whether there is a panel that matches the panel image information in the preset panel library memory]”),
wherein the generating, by the processor, the target panel based on the panel image information and the preset blank panel comprises (Paragraph [0077], “In certain embodiments, GUI implementations 110, 112, and 114 may each correspond to a code template that implements at least a portion of the application's GUI related functionality. The code template may be made up of one or more source code files containing high-level code (which may comprise methods, functions, classes, event handlers, etc.) that can be compiled or interpreted to generate an executable that can be executed by one or more processors of a computer system”; Paragraph [0078], “GUI implementations 110, 112, and 114 may each be based on information specified in GUI model 124. For example, if the GUI model specifies that a GUI window or screen comprises a particular set of GUI components, the source code that is generated for the GUI window may include code logic for instantiating a GUI screen including each of the GUI components [wherein the generating, by the processor, the target panel based on the panel image information and the preset blank panel comprises]”):
- determining, by the processor, whether the panel type information is a text panel type (Paragraph [0096], “For example, a blob detection technique and image recognition may be used to detect icons that may correspond to image-based GUI components (e.g., an image buttons, radio buttons, switch buttons), while a character recognition technique may be used to detect regions of text that may correspond to text-based GUI components (e.g., a text buttons) [determining, by the processor, whether the panel type information is a text panel type]”);
- if the panel type information is a text panel type, extracting, by the processor, text information from the panel image information and inserting the text information into a preset blank text panel to generate the target panel (Paragraph [0096], “For example, a blob detection technique and image recognition may be used to detect icons that may correspond to image-based GUI components (e.g., an image buttons, radio buttons, switch buttons), while a character recognition technique may be used to detect regions of text that may correspond to text-based GUI components (e.g., a text buttons). In doing so, feature detector subsystem 310 may consult the set of rules associated with the partition for performing the processing. Upon the completion of the detection process, each of the detected features may be passed to component identifier subsystem 312 for further processing”; Paragraph [0164], “After no more GUI components can be identified from the detected features, the component identifier may designate the remaining features as static text or images”; Paragraph [0007], “For example, the application requirements information for an application may include one or more images (e.g., photos of mock-ups) of GUIs for the application. In certain embodiments, automated techniques are disclosed for generating the GUIs based upon these images [if the panel type information is a text panel type, extracting, by the processor, text information from the panel image information and inserting the text information into a preset blank text panel to generate the target panel]”) [Examiner’s remarks: When the text panel type is detected, the text is input into the component during the generation of code from the GUI image.]; and
- if the panel type information is not a text panel type, inserting, by the processor, the panel image information into a preset blank picture panel to generate the target panel (Paragraph [0096], “For example, a blob detection technique and image recognition may be used to detect icons that may correspond to image-based GUI components (e.g., an image buttons, radio buttons, switch buttons), while a character recognition technique may be used to detect regions of text that may correspond to text-based GUI components (e.g., a text buttons). In doing so, feature detector subsystem 310 may consult the set of rules associated with the partition for performing the processing. Upon the completion of the detection process, each of the detected features may be passed to component identifier subsystem 312 for further processing”; Paragraph [0164], “After no more GUI components can be identified from the detected features, the component identifier may designate the remaining features as static text or images”; Paragraph [0007], “For example, the application requirements information for an application may include one or more images (e.g., photos of mock-ups) of GUIs for the application. In certain embodiments, automated techniques are disclosed for generating the GUIs based upon these images [if the panel type information is not a text panel type, inserting, by the processor, the panel image information into a preset blank picture panel to generate the target panel]”) [Examiner’s remarks: If the component is not a text type (an image type), the image is input when the GUI is generated from the design images.].
Dayanandan does not explicitly disclose a visualized configuration system. However, Prichard discloses:
- through a visualized configuration system (Fig. 8, Paragraph [0033], “A screen shot of a representative dialog, having a left pane for navigation and a right pane for editing, is shown in FIG. 8. When the user makes a selection in the left pane of the dialog, DHTML invokes a script which recreates the right pane based on the selection (indicated by the arrow labeled Navigate in FIG. 1). When the user changes a value in a display field in the right pane using DHTML (thereby making it unnecessary to load a new HTML-based display text file in the web browser), the DHTML software for the right pane invokes a script which changes the value stored by the user interface generation software 14. This editing operation is indicated by the arrow labeled Edit in FIG. 1. Later, when the user enters a Save command, the application software queries the user interface generation software 14 and writes the changes back to the application (system) data via the interface 22. This store operation is indicated by the arrow labeled Save in FIG. 1 [through a visualized configuration system]”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Prichard into the teachings of Dayanandan to include “through a visualized configuration system”. As stated in Prichard, “Thus there is a need for a technique whereby graphical user interfaces and screen layouts can be easily customized with significant reductions in time, effort and cost” (Paragraph [0006]). Allowing users to easily customize the information input to create a GUI requires less work from experienced developers to develop similar applications from scratch. Allowing customizations through a GUI makes the edits more intuitive for those not well versed in GUI creation. Therefore, it would have been obvious to one of ordinary skill in the art to combine automated GUI generation with customization of the GUI through a user interface.
Regarding claim 8, the rejection of claim 7 is incorporated; and Dayanandan further discloses:
wherein the generating, by the processor, the micro-program based on the target panel (Paragraph [0077], “In certain embodiments, GUI implementations 110, 112, and 114 may each correspond to a code template that implements at least a portion of the application's GUI related functionality. The code template may be made up of one or more source code files containing high-level code (which may comprise methods, functions, classes, event handlers, etc.) that can be compiled or interpreted to generate an executable that can be executed by one or more processors of a computer system”; Paragraph [0078], “GUI implementations 110, 112, and 114 may each be based on information specified in GUI model 124. For example, if the GUI model specifies that a GUI window or screen comprises a particular set of GUI components, the source code that is generated for the GUI window may include code logic for instantiating a GUI screen including each of the GUI components [wherein the generating, by the processor, the micro-program based on the target panel…]”)…comprises:
Dayanandan does not explicitly disclose:
… through the visualized configuration system processor…
displaying, by the processor, the target panel through the visualized configuration system processor; and
in response to a user operation, performing, by the processor, a configuration operation on the target panel through the visualized configuration system processor to generate the micro-program, wherein the configuration operation comprises at least one of display parameter configuration, data bonding, event bonding, and component bonding.
However, Prichard discloses:
… through the visualized configuration system processor (Fig. 8, Paragraph [0033], “A screen shot of a representative dialog, having a left pane for navigation and a right pane for editing, is shown in FIG. 8. When the user makes a selection in the left pane of the dialog, DHTML invokes a script which recreates the right pane based on the selection (indicated by the arrow labeled Navigate in FIG. 1). When the user changes a value in a display field in the right pane using DHTML (thereby making it unnecessary to load a new HTML-based display text file in the web browser), the DHTML software for the right pane invokes a script which changes the value stored by the user interface generation software 14. This editing operation is indicated by the arrow labeled Edit in FIG. 1. Later, when the user enters a Save command, the application software queries the user interface generation software 14 and writes the changes back to the application (system) data via the interface 22. This store operation is indicated by the arrow labeled Save in FIG. 1 [through the visualized configuration system processor]”)…
displaying, by the processor, the target panel through the visualized configuration system processor (Fig. 8, Paragraph [0033], “A screen shot of a representative dialog, having a left pane for navigation and a right pane for editing, is shown in FIG. 8. When the user makes a selection in the left pane of the dialog, DHTML invokes a script which recreates the right pane based on the selection (indicated by the arrow labeled Navigate in FIG. 1). When the user changes a value in a display field in the right pane using DHTML (thereby making it unnecessary to load a new HTML-based display text file in the web browser), the DHTML software for the right pane invokes a script which changes the value stored by the user interface generation software 14. This editing operation is indicated by the arrow labeled Edit in FIG. 1. Later, when the user enters a Save command, the application software queries the user interface generation software 14 and writes the changes back to the application (system) data via the interface 22. This store operation is indicated by the arrow labeled Save in FIG. 1 [displaying, by the processor, the target panel through the visualized configuration system processor]”); and
in response to a user operation, performing, by the processor, a configuration operation on the target panel through the visualized configuration system processor to generate the micro-program, wherein the configuration operation comprises at least one of display parameter configuration, data bonding, event bonding, and component bonding (Fig. 8, Paragraph [0033], “A screen shot of a representative dialog, having a left pane for navigation and a right pane for editing, is shown in FIG. 8. When the user makes a selection in the left pane of the dialog, DHTML invokes a script which recreates the right pane based on the selection (indicated by the arrow labeled Navigate in FIG. 1). When the user changes a value in a display field in the right pane using DHTML (thereby making it unnecessary to load a new HTML-based display text file in the web browser), the DHTML software for the right pane invokes a script which changes the value stored by the user interface generation software 14. This editing operation is indicated by the arrow labeled Edit in FIG. 1. Later, when the user enters a Save command, the application software queries the user interface generation software 14 and writes the changes back to the application (system) data via the interface 22. This store operation is indicated by the arrow labeled Save in FIG. 1 [in response to a user operation, performing, by the processor, a configuration operation on the target panel through the visualized configuration system processor to generate the micro-program, wherein the configuration operation comprises at least one of display parameter configuration, data bonding, event bonding, and component bonding]”) [Examiner’s remarks: In response to the user operation (making a selection, saving), a configuration operation is performed (writing and changing back the application data).].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Prichard into the teachings of Dayanandan to include “… through the visualized configuration system processor…”, “displaying, by the processor, the target panel through the visualized configuration system processor”, and “in response to a user operation, performing, by the processor, a configuration operation on the target panel through the visualized configuration system processor to generate the micro-program, wherein the configuration operation comprises at least one of display parameter configuration, data bonding, event bonding, and component bonding”. As stated in Prichard, “Thus there is a need for a technique whereby graphical user interfaces and screen layouts can be easily customized with significant reductions in time, effort and cost” (Paragraph [0006]). Allowing users to easily customize information to be input to create a GUI requires less work from experienced developers to develop similar applications from scratch. Allowing customizations through a GUI makes the edits more intuitive for those not well versed in GUI creation. Therefore, it would be obvious to one of ordinary skill in the art to combine automating GUI generation and customization of the GUI through a user interface.
Regarding claim 13, the rejection of claim 7 is incorporated; and Dayanandan further discloses:
the determining, by the processor, the panel type information based on the panel image information comprises (Fig. 24; Paragraph [0087], “For example, for each known GUI component, information 214 may include a template GUI component (i.e., an example GUI component) that may be used to perform template matching with detected features of the image. GUI component information 214 may additionally include known phrases and known icons, which may also be used to identify text buttons and image buttons that are commonly found in similar GUIs. In some embodiments, GUI component information 214 may be organized into multiple dictionaries, where each dictionary corresponds to a particular domain (i.e., domain specific dictionaries). For example, the information 214 may comprise a dictionary specific to the database domain (e.g., icons and phrases commonly occurring in database-related GUIs), another dictionary specific to the mobile application domain (e.g., icons and phrases commonly occurring in mobile application GUIs), another dictionary specific to the email domain (e.g., icons and phrases commonly occurring in email-related applications), and so on. The first dictionary may include phrases and icons that are commonly found in database application GUIs while the second dictionary may include phrases and icons that are commonly found in mobile application GUIs [the determining the panel type information based on the panel image information comprises]”):
performing, by the processor, image segmentation on the page design image to obtain the panel image information (Fig. 24; Paragraph [0017], “If the area happens to encompass one or more GUI components that are determined from the image, the segment is considered to contain those GUI components. If the area happens to encompass one or more smaller closed contours that are detected in the image, the segment is considered to contain the smaller segments that correspond to those smaller closed contours”; Paragraph [0019], “In certain embodiments, the segmentation of a partition may be directed by the set of rules associated with the partition. In particular, the set of rules may cause one or more detection techniques to be used in scanning the pixels of the rectangular area that corresponds to the segment. For example, an edge detection operator may be used to detect edges of closed contours that are located within the partition. Next, one or more outermost closed contours may be determined from the detected edges and the borders of the segment [performing image segmentation on the page design image to obtain the panel image information]”) [Examiner’s remarks: Image segmentation is used to partition the page design image to determine panel image information (components in the partition).]; and
performing, by the processor, image recognition on the panel image information to obtain the panel type information (Fig. 24; Paragraph [0096], “For example, a blob detection technique and image recognition may be used to detect icons that may correspond to image-based GUI components (e.g., an image buttons, radio buttons, switch buttons), while a character recognition technique may be used to detect regions of text that may correspond to text-based GUI components (e.g., a text buttons) [performing image recognition on the panel image information to obtain the panel type information]”) [Examiner’s remarks: Image recognition techniques are used to determine the type of component (e.g. image button) or that the component is text.].
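Examiner’s note: the two-step flow mapped above for claim 13 (image segmentation to obtain panel regions, followed by recognition to obtain panel type information) can be illustrated with a minimal sketch. The grid representation and the text-versus-picture heuristic below are illustrative assumptions for exposition only; they are not drawn from Dayanandan, which uses edge detection, blob detection, and character recognition on actual images.

```python
# Minimal sketch of a segment-then-classify flow. A "page design image"
# is modeled as a grid of characters so the control flow stays visible;
# a real system would operate on pixels.

def segment_page(page):
    """Split a page (list of rows) into contiguous non-blank row bands,
    standing in for closed-contour detection of panel regions."""
    segments, current = [], []
    for row in page:
        if any(cell != " " for cell in row):
            current.append(row)
        elif current:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

def recognize_type(segment):
    """Classify a panel region as a text panel or a picture panel,
    standing in for character recognition vs. blob detection."""
    cells = [c for row in segment for c in row if c != " "]
    alpha = sum(c.isalpha() for c in cells)
    return "text" if cells and alpha / len(cells) > 0.5 else "picture"

page = [
    list("Hello  "),   # a text-like region
    list("       "),   # blank separator between panels
    list("###### "),   # a picture-like region
]
panel_types = [recognize_type(s) for s in segment_page(page)]
# panel_types -> ["text", "picture"]
```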
Regarding claim 15, the rejection of claim 7 is incorporated; and Dayanandan further discloses:
A non-transitory computer-readable storage medium, comprising a stored program, wherein acts of the micro-program generation method according to claim 7 are implemented when the program is run (Paragraph [0212], “FIG. 24 illustrates an exemplary computer system 2400 that may be used to implement certain embodiments. In some embodiments, computer system 2400 may be used to implement any of the various servers and computer systems described above. As shown in FIG. 24, computer system 2400 includes various subsystems including a processing unit 2404 that communicates with a number of peripheral subsystems via a bus subsystem 2402. These peripheral subsystems may include a processing acceleration unit 2406, an I/O subsystem 2408, a storage subsystem 2418 and a communications subsystem 2424. Storage subsystem 2418 may include tangible computer-readable storage media 2422 and a system memory 2410 [A non-transitory computer-readable storage medium, comprising a stored program, wherein acts of the micro-program generation method according to claim 7 are implemented when the program is run]”).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over US 20180203571 A1 (hereinafter “Dayanandan”) in view of US 20030025732 A1 (hereinafter “Prichard”), further in view of “Performance Evaluation of Java, JavaScript and PHP Serialization Libraries for XML, JSON and Binary formats” by Jan Vanura and Pavel Kriz (hereinafter “Vanura”).
Regarding claim 9, the combination of Dayanandan and Prichard does not explicitly disclose:
wherein a data format of the micro-program is a JavaScript Object Notation (JSON) data format based on Domain Specific Language (DSL) description.
However, Vanura discloses:
wherein a data format of the micro-program is a JavaScript Object Notation (JSON) data format based on Domain Specific Language (DSL) description (Abstract, “The aim of this paper is to compare the formats and libraries used for serialization and deserialization of data, typically with RESTful web services, in terms of the processing time and size of the output data. The formats tested include XML, JSON, MessagePack, Avro, Protocol Buffers, and native serialization of each of the tested programming languages. Serialization and deserialization is tested in PHP, Java and JavaScript using 49 different official and third party libraries”; Fig. 1, 2, and 3 include a bar measuring the performance of dsl-json.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Vanura into the combined teachings of Dayanandan and Prichard to include “wherein a data format of the micro-program is a JavaScript Object Notation (JSON) data format based on Domain Specific Language (DSL) description”. As stated in Vanura, “In order to exchange the data between the client and the server (a Web service), it is necessary for the data to be transmitted in a format that is understood by both sides” (Pages 166-167). Especially for smaller applications with minimal data, it is important for applications to be able to communicate, and having a consistent data format allows for this. DSL-JSON is a known data format which allows for serialization of data. Therefore, it would be obvious to one of ordinary skill in the art to combine automating creation of micro-programs with the DSL-JSON data format.
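Examiner’s note: as an illustrative sketch of a micro-program described in a JSON data format based on a domain-specific vocabulary, the fragment below shows serialization and round-trip deserialization with a standard JSON library. All key names (“panelType”, “dataBinding”, “events”, etc.) are hypothetical assumptions for exposition and are not drawn from any cited reference.

```python
# Sketch: a micro-program description as a JSON document whose keys form
# a small domain-specific vocabulary, serialized and restored losslessly.
import json

micro_program = {
    "panelType": "text",
    "display": {"width": 320, "fontSize": 14},
    "dataBinding": {"source": "user.profile.name"},
    "events": [{"trigger": "tap", "action": "openDetail"}],
}

# Serialize to an interchange string, then restore; sort_keys makes the
# output deterministic for comparison or storage.
serialized = json.dumps(micro_program, sort_keys=True)
restored = json.loads(serialized)
```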
Response to Arguments
Applicant's arguments filed November 18, 2025 have been fully considered but they are not persuasive.
Regarding the 35 U.S.C. 101 rejection, Applicant argues:
Applicant amends claims 1-2, 6-9, 13 and 15 to have sufficient structures and to be implemented by the processor of the computer device.
Therefore amended claims 1-2, 6-9, 13 and 15 can be distinct from a mental process and not an abstract idea. Claims 3-5, 10-12, 14 and 16-20 are canceled.
Accordingly, Applicant respectfully requests that the rejections be reconsidered and withdrawn.
[See Remarks – Page 6]
Examiner’s response:
Examiner respectfully disagrees. Whether a claim recites sufficient structure is not a consideration in a rejection of the claims under 35 U.S.C. 101 as directed to an abstract idea. The additional elements added merely apply a generic computer/computer component to the abstract idea, which does not amount to a practical application under Prong 2, nor to significantly more under Step 2B. Therefore, the rejection under 35 U.S.C. 101 is maintained.
Regarding the rejection under 35 U.S.C. 103, Applicant argues:
Paragraph 0164 of Dayanandan recites:
"[0164] After no more GUI components can be identified from the detected features, the component identifier may designate the remaining features as static text or images."
From above it can be seen that Dayanandan does not disclose or give any hint about a "blank panel", "blank text panel" and "blank picture panel", therefore does not disclose or teach the features related to the blank panel, and does not disclose the features mentioned above.
[See Remarks – Page 8]
Examiner’s Response:
Examiner respectfully disagrees. Dayanandan discloses static text and static images, which are designated after it is determined that the detected components do not match any other known components. The static text and static images are considered to correspond to the blank panel, as either a blank text panel or a blank picture panel, once the other potential templates have been exhausted and what remains is either text or an image.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIVIAN WEIJIA DUAN whose telephone number is (703)756-5442. The examiner can normally be reached Monday-Friday 8:30AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wei Y Mui can be reached at (571) 272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/V.W.D./Examiner, Art Unit 2191 /WEI Y MUI/Supervisory Patent Examiner, Art Unit 2191