DETAILED ACTION
This office action is responsive to applicant’s communication filed 12/22/2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
This application is acknowledged as a National Stage entry of international application PCT/CN2021/142129. Priority to PCT/CN2021/142129 with a priority date of 12/28/2021 is acknowledged under 35 U.S.C. 365 and 37 CFR 1.78.
Response to Arguments
Applicant’s arguments, see pg. 14, filed 12/22/2025, with respect to the objection to claims 103, 107, and 114 have been fully considered and are persuasive. The objection to claims 103, 107, and 114 has been withdrawn.
Applicant’s arguments, see pg. 14, filed 12/22/2025, with respect to the rejection of claims 113 and 114 under 35 U.S.C. 112 have been fully considered and are persuasive. The rejection of claims 113 and 114 under 35 U.S.C. 112 has been withdrawn.
Regarding applicant’s remarks, filed 12/22/2025, with respect to the rejection of independent claims 90, 115, and 116 under 35 U.S.C. 102 and 35 U.S.C. 103, including amended limitations from the dependent claims, applicant presents two distinct arguments.
Firstly, applicant argues that the hand-painted trajectory of Chan does not constitute a “graph drawing command”. This argument has been fully considered, but it is not persuasive. Under the broadest reasonable interpretation of the claim language, any input by a user which causes a graph to be displayed may be considered a “graph drawing command”. Because the user’s hand-painted trajectory in Chan causes a chart to be displayed, it constitutes a “graph drawing command” under this interpretation.
Secondly, applicant argues that the amended limitation “wherein the writing prompt table is configured to prompt the user to write in various cells of the writing prompt table” is not disclosed or suggested by the existing references. This argument has been fully considered. Examiner maintains that the presence of an empty table cell that is clearly intended to accept user input could itself be considered a prompt to the user. However, for the sake of clarity, the existing rejection has been withdrawn, and a new ground of rejection is made in view of W3Schools ("HTML <input> placeholder Attribute").
Claim Interpretation
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification.
The following terms in the claims have been given the following interpretations in light of the specification:
“Intelligent graphing mode”: [00260] “Alternatively, the intelligent graphing mode in the embodiments indicates a smart graphing function in other application, such as enabling a graphing function or opening an interface with a graphing function in a handwriting application of an electronic whiteboard.”
Thus, an “intelligent graphing mode” is interpreted as any state in which a device is capable of generating a graph based on a user’s input, whether through a graphical user interface or via gesture/handwriting recognition.
This definition is used for purposes of searching for prior art, but is not incorporated into the claims. Should Applicant intend a different definition, Applicant should point to the portions of the specification that clearly set forth that definition.
Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and ordinary meaning of terms as understood by one having ordinary skill in the art used in a claim will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int’l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).
Claim 100 recites “at least one of” and then lists “a drawing command indicating an arrow, a drawing command indicating a circle, a drawing command indicating a polygon, a drawing command indicating a cylinder, or a drawing command indicating a rectangle”. Since “at least one of” is disjunctive, any one of the listed elements found in the prior art is sufficient to reject the claim. On balance, the disjunctive interpretation enjoys the most specification support, and for that reason the disjunctive interpretation (one of A, B, or C) is adopted for the purposes of this Office Action. Applicant’s comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 90, 95, 99, 100, 115, and 116 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan et al. (US 20160350951 A1, hereinafter "Chan") in view of Yang et al. (CN 112394859 A, hereinafter "Yang") and W3Schools ("HTML <input> placeholder Attribute", retrieved from Wayback Machine, 01/02/2020: https://web.archive.org/web/20200102144341/https://www.w3schools.com/tags/att_input_placeholder.asp).
Regarding claim 90, Chan discloses: a display apparatus (fig. 1, [0035] “The electronic device 100 at least includes a processing unit 110, a display unit 120, a touch unit 130 and a storage unit 140”), comprising a display screen (display unit 120) and a control circuit (processing unit 110 and storage unit 140, [0036] “The processing unit 110 is coupled to the display unit 120, the touch unit 130 and the storage unit 140.”), wherein
the display screen is configured for content display ([0037] “The display unit 120 is, for example, a cathode ray tube (CRT) display, a liquid crystal display (LCD), a plasma display or the like.”, [0045] “…the processing unit 110 displays a chart corresponding to the predefined chart type in the display unit 120…”); and
the control circuit comprises a processor (processing unit 110) and a memory (storage unit 140), wherein the memory is configured for storing programs executable by the processor, and the processor is configured to read the programs in the memory ([0040] “The storage unit 140 is, for example, a fixed or a movable device in any possible form, including a random access memory (RAM), a read-only memory (ROM), a flash memory, a hard drive or other similar devices, or a combination of the above-mentioned devices. Herein, a computer program product is stored in the storage unit 140. Basically, the computer program product is assembled by a plurality of program sections (i.e. building an organization diagram program section, approving a list program section, setting a program section, and deploying a program section). Moreover, after the program sections are loaded and executed by the electronic device 100, the chart drawing method as described below may be executed by the processing unit 110.”) to perform:
recognizing writing trajectory information in a display area of the display screen and obtaining a data recognition result ([0007] “The chart drawing method includes receiving a drawing instruction, and enabling a drawing operation according to the drawing instruction; receiving a first hand-painted trajectory, and recognizing whether the first hand-painted trajectory matches a predefined chart type”, [0013] “In an embodiment of the invention, after the step of dividing the chart into the areas according to the second hand-painted trajectory, the chart drawing method further includes: receiving a hand-written value, and recognizing the hand-written value as a digitized text; and adjusting sizes of the areas according to the digitized text.”);
in response to a graph drawing command from a user, determining a graph type corresponding to the graph drawing command (fig. 2, [0044] “Referring back to FIG. 2, in step S210, after the drawing operation is enabled, the processing unit 110 may receive a hand-painted trajectory (a first hand-painted trajectory) of the user, and recognize whether the first hand-painted trajectory matches a predefined chart type… The predefined chart type is, for example, a circle, a triangle and a rectangle, and their corresponding charts are a pie chart, a pyramid chart and a table, respectively.”, also see fig. 4A to 4C and para. [0047] for examples); and
drawing a graph of the graph type corresponding to the graph drawing command according to the data recognition result, and displaying the drawn graph in the display area (fig. 2, [0045] “In step S215, when the first hand-painted trajectory matches the predefined chart type, the processing unit 110 displays a chart corresponding to the predefined chart type in the display unit 120, and clears the first hand-painted trajectory. For example, when it is recognized that the first hand-painted trajectory matches the circle, the processing unit 110 displays a pie chart frame corresponding to the predefined chart type in the display unit 120, and clears the first hand-painted trajectory. As another example, when it is recognized that the first hand-painted trajectory matches the rectangle, the processing unit 110 displays a table frame corresponding to the predefined chart type in the display unit 120, and clears the first hand-painted trajectory. As yet another example, when it is recognized that the first hand-painted trajectory matches the triangle, the processing unit 110 displays a pyramid chart frame corresponding to the predefined chart type in the display unit 120, and clears the first hand-painted trajectory.”).
wherein the processor is further configured for:
displaying a writing prompt table in the display area, wherein the writing prompt table comprises the cells ([0072] “By performing aforesaid methods repeatedly, the table 1100 may be divided into multiple fields. After the table is drawn, the user may also click on an empty field to edit the selected field in the hand-writing manner.”); and
receiving the writing trajectory information written by the user in the writing prompt table ([0072] “The processing unit 110 recognizes and converts contents from hand-written contents into the digitized contents, and displays the converted digitized contents in the selected field in the display unit 120”).
Chan does not explicitly teach that the writing prompt table is displayed before recognizing the writing trajectory information in the display area of the display screen and obtaining the data recognition result.
Yang teaches displaying a writing prompt table before recognizing the writing trajectory information in the display area of the display screen and obtaining the data recognition result ([0048]-[0049] “The creation of a table in an electronic whiteboard can be achieved through its menu bar, as shown in Figure 1. A menu diagram of an electronic whiteboard is provided. The terminal user clicks the "Smart Graphics" option in the secondary menu bar and selects the "Insert Table" option in the "Smart Graphics". At this time, the electronic whiteboard pops up the property box shown in Figure 2. The terminal user can set the number of columns and rows of the table to be created as needed, and click OK to create the corresponding table.”, [0012] describes receiving the writing trajectory information in the table, which must happen after the table has already been created and displayed to the user).
Chan and Yang are both analogous to the claimed invention because they are in the same field of recognizing handwritten user input in the context of data visualization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan with the teachings of Yang to display a table to accept handwritten user input before accepting the input. The motivation would have been to add an organized manner of accepting and reading in handwritten user data to use in generating a chart.
The combination of Chan in view of Yang is not relied upon to teach: wherein the writing prompt table is configured to prompt the user to write in various cells of the writing prompt table.
W3Schools teaches: wherein the writing prompt table is configured to prompt the user to write in various cells of the writing prompt table (“The placeholder attribute specifies a short hint that describes the expected value of an input field (e.g. a sample value or a short description of the expected format). The short hint is displayed in the input field before the user enters a value.”; also see included example).
W3Schools is analogous to the claimed invention because it pertains to the same issue of accepting user input to an input cell/box, as does the combination of Chan in view of Yang. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan in view of Yang with the teachings of W3Schools to add placeholder text to the table cells indicating to a user to enter information. The motivation would have been to improve the user experience by adding clear, yet unobtrusive, indications on how they can interact with the software tools.
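For illustration only, the placeholder behavior described by W3Schools may be sketched in a minimal HTML fragment; the table structure and hint strings below are hypothetical examples supplied by the Examiner, not text taken from any cited reference:

```html
<!-- Hypothetical sketch: each empty cell carries a short hint
     (a "prompt") that is displayed in the input field until the
     user enters a value, per the cited placeholder attribute. -->
<table>
  <tr>
    <td><input type="text" placeholder="Enter category"></td>
    <td><input type="text" placeholder="Enter value"></td>
  </tr>
</table>
```

In such a fragment, every empty cell visibly prompts the user for input while remaining blank for data entry, which is the manner in which the placeholder teaching is applied to the writing prompt table of the combination.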
Regarding claim 95, the combination of Chan in view of Yang and W3Schools teaches: the display apparatus according to claim 90, wherein a quantity of rows and a quantity of columns of the writing prompt table are fixed, or a quantity of rows and a quantity of columns of the writing prompt table are determined according to parameters input by the user (Yang [0048] “The terminal user can set the number of columns and rows of the table to be created as needed, and click OK to create the corresponding table.”).
Chan and Yang are both analogous to the claimed invention because they are in the same field of recognizing handwritten user input in the context of data visualization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan in view of Yang and W3Schools with the additional teachings of Yang to allow a user to specify the number of rows and columns of the input table. The motivation would have been to improve a user’s control over their desired data input.
Regarding claim 99, the combination of Chan in view of Yang and W3Schools teaches: the display apparatus according to claim 90, wherein the graph drawing command comprises a graph selection command (Chan [0007] “The chart drawing method includes receiving a drawing instruction, and enabling a drawing operation according to the drawing instruction; receiving a first hand-painted trajectory, and recognizing whether the first hand-painted trajectory matches a predefined chart type; when the first hand-painted trajectory matches the predefined chart type, displaying a chart corresponding to the predefined chart type”), and the processor is further configured for determining the graph type as follows:
in response to the graph selection command from the user in an intelligent graphing mode, determining the graph type corresponding to the graph selection command (Chan [0007] “The chart drawing method includes receiving a drawing instruction, and enabling a drawing operation according to the drawing instruction; receiving a first hand-painted trajectory, and recognizing whether the first hand-painted trajectory matches a predefined chart type; when the first hand-painted trajectory matches the predefined chart type, displaying a chart corresponding to the predefined chart type”); and/or
wherein the graph drawing command comprises a first touch gesture (Chan [0039] “Further, the display unit 120 and the touch unit 130 may also be integrated as a touch screen”, [0043] “Referring to FIG. 3, the present embodiment is applied in the electronic device including the touch screen”, [0007] “The chart drawing method includes receiving a drawing instruction, and enabling a drawing operation according to the drawing instruction; receiving a first hand-painted trajectory, and recognizing whether the first hand-painted trajectory matches a predefined chart type”), and the processor is configured for determining the graph type as follows:
in response to the first touch gesture from the user in an intelligent graphing mode, recognizing a first graphic indicated by the first touch gesture, and determining the graph type based on the first graphic (Chan [0007] “…and recognizing whether the first hand-painted trajectory matches a predefined chart type; when the first hand-painted trajectory matches the predefined chart type, displaying a chart corresponding to the predefined chart type…”).
Regarding claim 100, the combination of Chan in view of Yang and W3Schools teaches: the display apparatus according to claim 90, wherein the graph drawing command comprises at least one of a drawing command indicating an arrow, a drawing command indicating a circle (Chan [0012] “In an embodiment of the invention, the predefined chart type corresponding to the first hand-painted trajectory is a circle, and the chart corresponding to the circle is a pie chart.”), a drawing command indicating a polygon (Chan [0017] “In an embodiment of the invention, the predefined chart type corresponding to the first hand-painted trajectory is a triangle, and the chart corresponding to the triangle is a pyramid chart.”), a drawing command indicating a cylinder, or a drawing command indicating a rectangle (Chan [0015] “In an embodiment of the invention, the predefined chart type corresponding to the first hand-painted trajectory is a rectangle, and the chart corresponding to the rectangle is a table.”).
Regarding claim 115, it is rejected using the same references, rationale, and motivation to combine described in the rejection of claim 90.
Regarding claim 116, it is rejected using the same references, rationale, and motivation to combine described in the rejection of claim 90, with the additional limitation of: a non-transitory computer readable storage medium with computer programs stored therein (Chan fig. 1, [0040] “The storage unit 140 is, for example, a fixed or a movable device in any possible form, including a random access memory (RAM), a read-only memory (ROM), a flash memory, a hard drive or other similar devices, or a combination of the above-mentioned devices. Herein, a computer program product is stored in the storage unit 140. Basically, the computer program product is assembled by a plurality of program sections (i.e. building an organization diagram program section, approving a list program section, setting a program section, and deploying a program section). Moreover, after the program sections are loaded and executed by the electronic device 100, the chart drawing method as described below may be executed by the processing unit 110.”).
Claim(s) 91 and 111 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") as applied to claim 90 above, and further in view of Lee et al. (US 20150015504 A1, hereinafter "Lee").
Regarding claim 91, the combination of Chan in view of Yang and W3Schools teaches the display apparatus according to claim 90, wherein the display apparatus comprises… a touch component (Chan fig. 1 touch unit 130, [0039] “Further, the display unit 120 and the touch unit 130 may also be integrated as a touch screen. The touch screen is, for example, a screen that includes touch control functions (e.g., a resistive type touch screen, a capacitive type touch screen, a wave type touch screen, etc.) or a screen combined with other elements to include the touch control functions.”), and the touch component is configured for obtaining handwriting trajectory information (Chan [0013] “In an embodiment of the invention, after the step of dividing the chart into the areas according to the second hand-painted trajectory, the chart drawing method further includes: receiving a hand-written value, and recognizing the hand-written value as a digitized text; and adjusting sizes of the areas according to the digitized text.”, also see [0064] and [0072] for examples of handwriting recognition).
The combination of Chan in view of Yang and W3Schools does not explicitly teach: wherein the display apparatus comprises an electronic whiteboard, and the electronic whiteboard comprises a touch component.
Lee teaches a display apparatus for displaying a chart based on handwritten input wherein the display apparatus comprises an electronic whiteboard ([0010] “The description relates to interaction with data visualizations on digital displays, such as digital whiteboards.”), and the electronic whiteboard comprises a touch component ([0011] “For the purpose of this discussion, "interactive digital display" can include screens with pen- and multi-touch-enabled input modalities.”, [0035] describes a user operating touch commands on the electronic whiteboard).
Lee and the combination of Chan in view of Yang and W3Schools are both analogous to the claimed invention because they are in the same field of generating a chart based on handwritten user input. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan in view of Yang and W3Schools with the teachings of Lee to implement it on a larger digital display such as a digital whiteboard. The motivation would have been to expand its usage to larger group settings.
Regarding claim 111, the combination of Chan in view of Yang and W3Schools teaches the display apparatus according to claim 90, but does not explicitly teach: wherein after displaying the drawn graph in the display area, the processor is further configured for:
in response to a modification command from the user, modifying corresponding writing trajectory information and updating the graph according to the writing trajectory information after modification.
Lee teaches: wherein after displaying the drawn graph in the display area, the processor is further configured for:
in response to a modification command from the user, modifying corresponding writing trajectory information and updating the graph according to the writing trajectory information after modification (fig. 13, [0037] “FIG. 13 shows the bar chart 1100 and the chart copy 1300 that was created in the example in FIG. 12. As shown in FIG. 13, the user 904 is changing the x-axis label 1302 of the chart copy 1300. The user is changing the x-axis label to "Funding per capita" by simply writing over the existing axis title. In one example, the original axis title can disappear when the user begins to write a new title over the original axis title on the GUI 908. In another example, the original axis title may remain visible until the user finishes writing the new title over it, and indicates acceptance or input of the new title through some form of input… In the example shown in FIG. 13, the new x-axis label is recognized by the interactive digital display and the chart copy 1300 is replaced with a new chart 1400, titled "Funding per Capita by State," as shown in FIG. 14. In this case, in response to the user entering "Funding per capita" as a new x-axis label, the interactive digital display 902 automatically initiated a search for related data and generated new chart 1400 as a new data visualization representing the newly found, related data.”).
Lee and the combination of Chan in view of Yang and W3Schools are both analogous to the claimed invention because they are in the same field of generating a chart based on handwritten user input. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan in view of Yang and W3Schools with the teachings of Lee to allow a chart generated via handwritten input to also be edited via handwritten input. The motivation would have been for convenience; to allow a user to make updates and fix mistakes rather than having to create a new chart.
Claim(s) 93 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") as applied to claim 90 above, and further in view of Kubo (US 20170061665 A1).
Regarding claim 93, the combination of Chan in view of Yang and W3Schools teaches: the display apparatus according to claim 90, and wherein the trajectory information of the first writing from the user is located within a cell of the writing prompt table (Yang [0012] “In an embodiment of the present invention, when it is determined that the starting contact point position of the writing track drawn on the electronic whiteboard is inside the table, it can be determined that the drawn writing track is used to fill the corresponding cell; following the change of the current contact point of the writing track, if the distance between the current contact point and the table boundary is less than a first threshold, it can be determined that the writing track may exceed the boundary of the cell. At this time, the boundary of the cell can be adjusted so that the distance between the writing track and the cell boundary always remains greater than or equal to the first threshold, thereby avoiding the problem of the writing track exceeding the table range and improving the user experience.”).
Yang teaches resizing a cell of a table to fit the user’s input. Chan and Yang are both analogous to the claimed invention because they are in the same field of recognizing handwritten user input in the context of data visualization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan in view of Yang and W3Schools with the additional teachings of Yang to reshape the table around a user’s initial input, ensuring the user’s writing does not fall outside the boundaries of the table’s cells. The motivation would have been to improve the presentability of the user’s data table for display on an electronic whiteboard.
The combination of Chan in view of Yang and W3Schools does not explicitly teach: wherein the processor is configured for:
determining a baseline position of the writing prompt table according to a position of trajectory information of a first writing from the user in the display area; and
displaying, in the display area, the writing prompt table with a fixed quantity of rows and a fixed quantity of columns according to the baseline position of the writing prompt table.
Kubo teaches wherein the processor is configured for:
determining a baseline position of the writing prompt table according to a position of trajectory information of a first writing from the user in the display area ([0008] “The parameter extraction unit extracts, from a position and length of a segment composing the one polygonal line, a parameter, which corresponds to the specified object creation processing, among parameters which correspond to the one or more pieces of object creation processing and include at least one set of parameters corresponding to the at least one object creation processing out of… a position of the table, a height of a row, a width of a column, a number of rows and a number of columns in the table creation processing”, fig. 12A-12G, [0076] to [0082] describe how the position and dimensions of a table are determined based on the trajectory of a single continuous line drawn by a user); and
displaying, in the display area, the writing prompt table with a fixed quantity of rows and a fixed quantity of columns (fig. 12G, [0081] “When the input of the command 121 is ended, then the number of columns is finalized, and as shown in FIG. 12G, such a table 122 of the finalized number of rows and columns is displayed.”) according to the baseline position of the writing prompt table ([0008] “The parameter extraction unit extracts, from a position and length of a segment composing the one polygonal line, a parameter, which corresponds to the specified object creation processing, among parameters which correspond to the one or more pieces of object creation processing and include at least one set of parameters corresponding to the at least one object creation processing out of… a position of the table, a height of a row, a width of a column, a number of rows and a number of columns in the table creation processing”).
The combination of Chan in view of Yang and W3Schools teaches adjusting the position of an individual cell of a table based on handwritten user input within the cell; Kubo teaches determining the position of an entire table based on handwritten user input; it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the two inventions to determine the position of a table based on handwritten user input located within one of the cells. The motivation would have been for convenience, allowing a user to immediately start writing anywhere within the display area and automatically generating the table around their writing.
Claim(s) 94 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") as applied to claim 90 above, and further in view of Hudson et al. (US 20130124980 A1, hereinafter "Hudson").
Regarding claim 94, the combination of Chan in view of Yang and W3Schools teaches the display apparatus according to claim 90, but does not explicitly teach: wherein after displaying the writing prompt table, the processor is further configured for:
if trajectory information of continuous writing from the user exceeds a range of the writing prompt table, increasing a quantity of rows and/or a quantity of columns of the writing prompt table according to a position of the trajectory information of the continuous writing; or
if trajectory information of continuous writing from the user is located at a critical row of the writing prompt table, increasing a quantity of rows of the writing prompt table; and/or if trajectory information of continuous writing from the user is located at a critical column of the writing prompt table, increasing a quantity of columns of the writing prompt table.
Hudson teaches: wherein after displaying the writing prompt table, the processor is further configured for:
if trajectory information of continuous writing from the user exceeds a range of the writing prompt table, increasing a quantity of rows and/or a quantity of columns of the writing prompt table according to a position of the trajectory information of the continuous writing; or
if trajectory information of continuous writing from the user is located at a critical row of the writing prompt table, increasing a quantity of rows of the writing prompt table; and/or if trajectory information of continuous writing from the user is located at a critical column of the writing prompt table, increasing a quantity of columns of the writing prompt table ([0214] “Referring back to FIG. 42, a content creator can add a new row by simply adding data to an empty row at the bottom of the table. The table creator can be configured to automatically add a new row at the bottom when all rows have been used, such as row 4214.”).
Hudson and the combination of Chan in view of Yang and W3Schools are both analogous to the claimed invention because they are in the same field of data visualization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan in view of Yang and W3Schools with the teachings of Hudson to add additional rows to the input table as a user fills the existing ones. The motivation would have been for convenience, automating the process of adding new rows rather than requiring a user to manually perform an operation.
Claim(s) 96 and 110 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") as applied to claim 90 above, and further in view of Stitz et al. (US 20170220858 A1, hereinafter "Stitz").
Regarding claim 96, the combination of Chan in view of Yang and W3Schools teaches: the display apparatus according to claim 90, wherein the data recognition result comprises a plurality of cells (Chan [0070]-[0071] describes dividing a table into multiple cells), and the data recognition result further comprises a data content in each cell (Chan [0072] “By performing aforesaid methods repeatedly, the table 1100 may be divided into multiple fields. After the table is drawn, the user may also click on an empty field to edit the selected field in the hand-writing manner. The processing unit 110 recognizes and converts contents from hand-written contents into the digitized contents, and displays the converted digitized contents in the selected field in the display unit 120.”); wherein the processor is configured for determining the data recognition result as follows:
determining various cells, wherein the various cells correspond to respective cells in a table (Chan [0072] “By performing aforesaid methods repeatedly, the table 1100 may be divided into multiple fields.”), and the writing trajectory information in the various cells corresponds to data contents in the respective cells in the table;
recognizing the writing trajectory information in the various cells and obtaining the data content in each cell (Chan [0072] “After the table is drawn, the user may also click on an empty field to edit the selected field in the hand-writing manner. The processing unit 110 recognizes and converts contents from hand-written contents into the digitized contents”); and
determining the data recognition result based on the data content in each cell (Chan [0072] “The processing unit 110 recognizes and converts contents from hand-written contents into the digitized contents, and displays the converted digitized contents in the selected field in the display unit 120”).
Chan teaches that the contents of each cell of the table are recognized independently; it does not explicitly teach that the data recognition result further comprises a position of each cell, nor does it teach partitioning the writing trajectory information into the various cells or determining the data recognition result based on the position of each cell, nor does Yang or W3Schools explicitly teach these limitations.
Stitz teaches: wherein the data recognition result comprises a plurality of cells ([0036] “Alternatively or additionally, the optical table recognition application 106 may identify the boundaries of the table as well as individual cells within the table”), and the data recognition result further comprises a position of each cell ([0036] “The table optical recognition application 106 may also recognize each cell based on relative positions of data between cells.”) and a data content in each cell ([0036] “It is further understood that the data within cells of the table may also be recognized using known OCR techniques”); wherein the processor is configured for determining the data recognition result as follows:
determining various cells, and partitioning the writing trajectory information into the various cells, wherein the various cells correspond to respective cells in a table, and the writing trajectory information in the various cells corresponds to data contents in the respective cells in the table ([0041] “In an embodiment, table recognition could also be performed using a bordered table or a borderless table approach, as a non-exclusive example. In a bordered table approach, the system, for example, uses clear borders of a table and any structure around the table to understand that the structured data is a table. In such an example, the system recognizes the table and identifies data in the identified cells.”);
recognizing the writing trajectory information in the various cells and obtaining the data content in each cell ([0019] “the present disclosure relates to a method for optically recognizing and identifying a structure of a table including individual cells, columns, rows, headers, etc., as the values stored in the table including, for example, characters (e.g., words and numbers), source (e.g., handwritten or typed), language, font, dates (e.g., long or short style), text alignment, number types, etc.”); and
determining the data recognition result based on the position of each cell and the data content in each cell (fig. 4, [0046] “FIG. 4 illustrates a preview mode 400 displaying a digitized version of the table captured in FIG. 3.”).
Stitz and the combination of Chan in view of Yang and W3Schools are both analogous to the claimed invention because they are in the same field of recognizing handwritten user input in the context of data visualization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan in view of Yang and W3Schools with the teachings of Stitz to allow the invention to recognize and digitize the content and structure of an entire table. The motivation would have been to enable the invention to generate more complex charts based on formatted data rather than relying on a user to directly input all values to the chart.
Regarding claim 110, the combination of Chan in view of Yang and W3Schools teaches the display apparatus according to claim 90, but does not explicitly teach: wherein the processor is further configured for:
determining a plurality of data groups by grouping a content written by the user in columns; or determining a plurality of data groups by grouping a content written by the user in rows.
Stitz teaches: wherein the processor is further configured for:
determining a plurality of data groups by grouping a content written by the user in columns; or determining a plurality of data groups by grouping a content written by the user in rows ([0037] “In addition to identifying columns, rows, and the general boundaries of a table, the optical recognition application 106 may identify other features of a table structure such as, for example, merged areas, cell alignment, column or row headers, formatting, borders, shading, cell effects, font, and styling.”, [0039] “In other embodiments, the optical recognition application 106 may recognize patterns in the data. For example, a row that captures the totals of one or more columns may be recognized by the optical recognition application 106 based on, for example, summing up each of the values in that column to arrive at a particular number provided in the final row, and further recognizing that adjacent cells in that row are also totals. Alternatively or additionally, the optical recognition application 106 may identify formulas based on identifiers in the table. For example, the table may recite the word “Total,” which provides a clue to the optical recognition application 106 that the corresponding row or column is a total row or column and therefore has applied thereon a sum function. In such embodiments, the optical recognition application 106 may recognize a formula.”).
Stitz and the combination of Chan in view of Yang and W3Schools are both analogous to the claimed invention because they are in the same field of recognizing handwritten user input in the context of data visualization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan in view of Yang and W3Schools with the teachings of Stitz to recognize and categorize handwritten input to a table by rows and columns. The motivation would have been to enable the invention to generate more complex charts based on formatted data rather than relying on a user to directly input all values to the chart.
Claim(s) 97 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") as applied to claim 90 above, and further in view of Lee (US 20150015504 A1) and Edgecomb et al. (US 20090021495 A1, hereinafter "Edgecomb").
Regarding claim 97, the combination of Chan in view of Yang and W3Schools teaches the display apparatus according to claim 90, but does not explicitly teach: wherein before recognizing the writing trajectory information in the display area of the display screen, and obtaining the data recognition result, the processor is further configured for:
displaying a title prompt in the display area, wherein the title prompt is configured to indicate a writing position of a table title for the user; and
obtaining first writing trajectory information in a first area corresponding to the title prompt, and obtaining second writing trajectory information in a second area corresponding to the title prompt.
Lee teaches: wherein before recognizing the writing trajectory information in the display area of the display screen, and obtaining the data recognition result, the processor is further configured for: displaying a title prompt in the display area, wherein the title prompt is configured to indicate a writing position of a table title for the user ([0029] “In some implementations, once the user roughly scribes the vertical stroke 912 and the horizontal stroke 914, the interactive digital display 902 can automatically overlay input areas (not shown) for labeling the axes.”); and
obtaining first writing trajectory information in a first area corresponding to the title prompt ([0029] “In some implementations, once the user roughly scribes the vertical stroke 912 and the horizontal stroke 914, the interactive digital display 902 can automatically overlay input areas (not shown) for labeling the axes. The user can then simply add the labels by typing, writing, or voice command, into the input areas.”).
Lee and the combination of Chan in view of Yang and W3Schools are both analogous to the claimed invention because they are in the same field of generating a chart based on handwritten user input. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan in view of Yang and W3Schools with the teachings of Lee to add fields to allow a user to input axis titles for the chart via handwriting. The motivation would have been to improve the presentability of the user’s data table for display on an electronic whiteboard, or to add a manner in which a chart may be automatically populated by data columns which are selected based on recognition of the written labels, as taught by Lee (see [0031]).
The combination of Chan in view of Yang and W3Schools and further in view of Lee does not explicitly teach: obtaining second writing trajectory information in a second area corresponding to the title prompt.
Edgecomb teaches obtaining second writing trajectory information in a second area corresponding to the title prompt ([0037] “A message may be written on one or multiple pieces or types of writing surfaces 50. Previously-written information can be tagged to be included in the message. For example, the user may have previously written on another sheet of paper, and this writing could be stored on the smart pen 100. Within the smart pen 100 application that allows for the communication features described herein, the user may invoke an attach function, which would then prompt the user to select, e.g., opposite corners of a box that contains the writing. The user could then make the appropriate selections to define a box around the writing on the other piece of paper that the user desires to attach, and the smart pen computing system 100 would attach the writing from its memory 250 that is associated with the selected area to the communication. The user may also select a metatag that relates to a particular section and send that. For example, by selecting the title of a previous note session (written in a prespecified title field), all notes that fall within that session are selected.”; notes from note session are associated with title field).
Edgecomb and the combination of Chan in view of Yang and W3Schools and further in view of Lee both pertain to the issue of obtaining and organizing handwritten data from a user; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan in view of Yang and W3Schools and further in view of Lee with the teachings of Edgecomb to include additional input fields which are associated with the title field. The motivation would have been to group together related data inputs, for instance everything pertaining to one particular chart, so that they can be easily selected and manipulated together, as taught by Edgecomb.
Claim(s) 98 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") and further in view of Lee (US 20150015504 A1) and Edgecomb (US 20090021495 A1) as applied to claim 97 above, and further in view of Stitz (US 20170220858 A1).
Regarding claim 98, the combination of Chan in view of Yang and W3Schools and further in view of Lee and Edgecomb teaches: the display apparatus according to claim 97, wherein the processor is configured for determining the data recognition result as follows:
recognizing the writing trajectory information in the first area to obtain a title of each coordinate axis (Lee [0030] “Once the user 904 has labeled the vertical stroke 912 and the horizontal stroke 914 the interactive digital display 902 can cause a bar chart 1100 to appear, as shown in FIG. 11. In this case, the interactive digital display can recognize the vertical stroke 912, the horizontal stroke 914, and the axis labels as suggestions to be axes for a chart.”), and determining the data recognition result based on the title of each coordinate axis and a position relationship involving the title of each coordinate axis (Lee fig. 10-11, [0030] “The interactive digital display can automatically draw machine-generated axes, labels, and other appropriate chart designators as shown in FIG. 11.”, [0031] and [0130] describe how data columns are selected for each axis based on recognition of the associated axis labels).
The combination of Chan in view of Yang and W3Schools and further in view of Lee and Edgecomb does not explicitly teach: partitioning the writing trajectory information in the second area into the various cells, recognizing the writing trajectory information in the various cells, and obtaining a data content in each cell of the second area; and
determining the data recognition result based on the data content in each cell of the second area and a position relationship between the title of each coordinate axis and each cell.
Stitz teaches: partitioning the writing trajectory information in the second area into the various cells, recognizing the writing trajectory information in the various cells, and obtaining a data content in each cell of the second area ([0036] "Alternatively or additionally, the optical table recognition application 106 may identify the boundaries of the table as well as individual cells within the table, either before or after capturing an image of the table without the use of a selection area 302. In some embodiments, the optical table recognition application 106 may recognize the generally linear horizontal and vertical lines that comprise a table. The table optical recognition application 106 may further identify the column headers based on shading, font size, recognition of a top or outermost cell, etc. The table optical recognition application 106 may also recognize each cell based on relative positions of data between cells. Tables are generally aligned such that data in a single column or a single row align. Accordingly, an image technique that separates data between columns and rows to identify distinct cells of the table may also be used… Accordingly, using enhanced image identification techniques, an optical recognition application 106 may recognize table boundaries, individual cells, and data within the table. It is understood that such image recognition techniques may also be employed with the selection area 302 implementation. It is further understood that the data within cells of the table may also be recognized using known OCR techniques."); and
determining the data recognition result based on the data content in each cell of the second area and a position relationship between the title of each coordinate axis and each cell ([0037] "In addition to identifying columns, rows, and the general boundaries of a table, the optical recognition application 106 may identify other features of a table structure such as, for example, merged areas, cell alignment, column or row headers, formatting, borders, shading, cell effects, font, and styling.").
Stitz and the combination of Chan in view of Yang and W3Schools and further in view of Lee and Edgecomb are both analogous to the claimed invention because they are in the same field of recognizing handwritten user input in the context of data visualization. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Chan in view of Yang and W3Schools and further in view of Lee and Edgecomb, which teaches generating a chart based on a combination of hand-written input recognition and pre-formatted digital data tables, with the teachings of Stitz, which teaches recognizing hand-written tables and converting them to the type of digital data table used by the combination of Chan in view of Yang and W3Schools and further in view of Lee and Edgecomb, to allow the invention to recognize and digitize the content and structure of an entire handwritten table to use as the basis for generating a chart. The motivation would have been to enable the invention to generate more complex charts based on formatted data, while still maintaining the invention’s handwriting-based user interface.
Claim(s) 101 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") as applied to claim 99 above, and further in view of Lee (US 20150015504 A1) and Mirra et al. (US 20130332387 A1, hereinafter "Mirra").
Regarding claim 101, the combination of Chan in view of Yang and W3Schools teaches the display apparatus according to claim 99, but does not explicitly teach:
wherein the data recognition result comprises two columns of cells, a drawing command indicating an arrow comprises a command for a first arrow and a command for a second arrow which are drawn in turn; the command for the first arrow corresponds to a first column in the two columns, and the command for the second arrow corresponds to a second column in the two columns;
and the processor is configured for:
if a first direction indicated by the command for the first arrow is an X-axis direction, determining that cells in the first column are for data on the X-axis of the graph, and cells in the second column are for data on a Y-axis of the graph;
if a first direction indicated by the command for the first arrow is the Y-axis direction, determining that cells in the first column are for data on the Y-axis of the graph, and cells in the second column are for data on the X-axis of the graph;
wherein the command for the first arrow is received earlier than the command for the second arrow, and a second direction indicated by the command for the second arrow is different from the first direction.
Lee teaches: wherein the data recognition result comprises two columns of cells, a drawing command indicating an arrow comprises a command for a first arrow and a command for a second arrow which are drawn in turn ([0127] “Arrows: users can begin the charting process by drawing two arrows (each input individually) for axes. In one design, single-stroke arrows can be utilized for performance reasons. Arrow annotations 4426 are maintained by an arrow recognizer, which "listens" (or observes) for raw strokes shaped like arrows.”); the command for the first arrow corresponds to a first column in the two columns, and the command for the second arrow corresponds to a second column in the two columns ([0128] “Charts: upon recognizing two (nearly) intersecting arrows as axes, the system creates a chart annotation for that chart. Within this structure is stored the semantic information for the chart, including the backend data sets loaded by the user, logical placement of x- and y-axis tic marks on the axes, and which columns are loaded into which axes.”);
wherein the command for the first arrow is received earlier than the command for the second arrow ([0127] “Arrows: users can begin the charting process by drawing two arrows (each input individually) for axes”; if inputs are independent, then one must be drawn before the other), and a second direction indicated by the command for the second arrow is different from the first direction (fig. 9, [0028] “In this example, initially, the user scribes (sketches) a vertical stroke 912 and an intersecting horizontal stroke 914 (or vice versa), the intersection of which is recognized by a chart recognizer as x-y axes for a chart.”).
Lee and the combination of Chan in view of Yang and W3Schools are both analogous to the claimed invention because they are in the same field of generating a chart based on handwritten user input. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan in view of Yang and W3Schools with the teachings of Lee to add an additional chart type, where the associated gesture is two arrows representing x and y axes, which is capable of loading in predetermined data. The motivation would have been to expand the functionality of the invention of Chan in view of Yang and W3Schools, allowing a user to easily generate more complex charts by loading in larger datasets rather than having to manually input each value.
The combination of Chan in view of Yang and W3Schools and further in view of Lee does not explicitly teach: the processor is configured for:
if a first direction indicated by the command for the first arrow is an X-axis direction, determining that cells in the first column are for data on the X-axis of the graph, and cells in the second column are for data on a Y-axis of the graph;
if a first direction indicated by the command for the first arrow is the Y-axis direction, determining that cells in the first column are for data on the Y-axis of the graph, and cells in the second column are for data on the X-axis of the graph;
Mirra teaches determining the order in which data columns are read into a display based on user input ([0066] “Referring again to FIG. 4, in an embodiment, selecting an Edit Columns widget 416 causes view computation unit 106 to display a GUI widget that may receive reconfiguration of data values that determine the identity and order of columns of the table view 408. FIG. 7 illustrates an example Edit Columns dialog 702 that displays a list of currently selected columns 706 and a tree representation of available columns 704. A comparison of selected columns 706 to FIG. 4 will show that the selected columns of FIG. 7 are represented in FIG. 4.”, [0068] discusses generating graphs from the table data described in [0066]).
Mirra and the combination of Chan in view of Yang and W3Schools and further in view of Lee are both analogous to the claimed invention because they are both in the same field of generating data visualizations. The combination of Chan in view of Yang and W3Schools and further in view of Lee teaches drawing the X and Y axes of a chart in a particular order and selecting a particular data column to be associated with each axis; Mirra teaches selecting two data columns in a particular order based on user input. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these teachings to select data columns to be associated with the X and Y axes based on the order in which the axes are drawn. The motivation would have been for convenience, removing the need for a user to manually input the data column selections for the axes if there are only two columns available.
Claim(s) 102 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") as applied to claim 99 above, and further in view of Golden Software ("Grapher: How to use the Graph Wizard").
Regarding claim 102, the combination of Chan in view of Yang and W3Schools teaches the display apparatus according to claim 99, but does not explicitly teach: wherein the processor is further configured for:
in response to a command for enabling an intelligent graphing mode from the user on a first display interface in a writing application, displaying a second display interface in the intelligent graphing mode in the display area; and
after drawing a graph in the second display interface and displaying the drawn graph, in response to a command for insertion from the user, inserting the graph in the second display interface into the first display interface of the writing application for display.
Golden Software teaches: wherein the processor is further configured for:
in response to a command for enabling an intelligent graphing mode from the user on a first display interface in a writing application (0:21, selecting “Wizard” icon on the Home / New Graph toolbar), displaying a second display interface in the intelligent graphing mode in the display area (0:30, Graph Wizard interface appears as an overlay on top of the main document); and
after drawing a graph in the second display interface and displaying the drawn graph (5:42, Graph Wizard interface displays a preview of the graph), in response to a command for insertion from the user (6:45, clicking “Finish” in Graph Wizard interface), inserting the graph in the second display interface into the first display interface of the writing application for display (6:46, graph which was previously displayed in Graph Wizard interface is inserted into the main document).
Golden Software and the combination of Chan in view of Yang and W3Schools are both analogous to the claimed invention because they are in the same field of generating data visualizations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan in view of Yang and W3Schools with the teachings of Golden Software to cause a chart drawing command to open a separate user interface for inserting a chart instead of immediately inserting the chart into the main display area. The motivation would have been to add an opportunity for the user to customize the chart and specify parameters for the chart before inserting it into the main display area, as taught by Golden Software.
Claim(s) 103 and 104 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") and further in view of Golden Software ("Grapher: How to use the Graph Wizard") as applied to claim 102 above, and further in view of Lee (US 20150015504 A1).
Regarding claim 103, the combination of Chan in view of Yang and W3Schools and further in view of Golden Software teaches the display apparatus according to claim 102, wherein the command for enabling the intelligent graphing mode comprises a second touch gesture (Chan [0043] “When the user 350 intends to insert a chart in the edit area 310, the user may click on the icon 324 representing the chart drawing function mode in the tool bar 320 in order to generate the drawing instruction. After the drawing instruction is received, the processing unit 110 may enable the drawing operation according to the drawing instruction, so that the user 350 may insert the chart in the edit area 310 of the electronic note interface 300 and edit the chart.”, [0038] “a user may click or slide on the touch unit 130 by using fingers, a stylus or various input devices in order to generate an input signal.”), and the processor is further configured for:
receiving a second graphic drawn by the user on the first display interface of the writing application, wherein the second graphic is generated according to the second touch gesture (Chan [0044] “…the processing unit 110 may receive a hand-painted trajectory (a first hand-painted trajectory) of the user, and recognize whether the first hand-painted trajectory matches a predefined chart type.”); and
triggering the second display interface in the intelligent graphing mode to be displayed in the display area (Golden Software, 0:21, selecting “Wizard” icon on the Home / New Graph toolbar, 0:30, Graph Wizard interface appears as an overlay on top of the main document) according to the second graphic (Chan [0049] “In addition, when the first hand-painted trajectory does not have the corresponding predefined chart type, a chart menu is displayed in the display unit 120 for the user to select one chart option from the chart menu.”).
Chan and Golden Software are both analogous to the claimed invention because they are in the same field of generating data visualizations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the touch gestures of Chan with the separate graph drawing interface of Golden Software to open the graph drawing interface via a touch gesture, with the touch gesture acting as a parameter input for the graph drawing interface. The motivation would have been to add an opportunity for the user to customize the chart and specify parameters for the chart before inserting it into the main display area, as taught by Golden Software.
The combination of Chan in view of Yang and W3Schools and further in view of Golden Software teaches gestures for enabling the “intelligent graphing mode”, generating the “second graphic”, and triggering the “second display interface”; however, it does not teach that these are performed using the same “second touch gesture”.
Lee teaches a user triggering a chart recognizer (“enabling the intelligent graphing mode”) by drawing x-y axes, which also generates an initial bar chart and enables the user to access a menu: ([0028] “the user scribes (sketches) a vertical stroke 912 and an intersecting horizontal stroke 914 (or vice versa), the intersection of which is recognized by a chart recognizer as x-y axes for a chart”, [0030] “Once the user 904 has labeled the vertical stroke 912 and the horizontal stroke 914 the interactive digital display 902 can cause a bar chart 1100 to appear, as shown in FIG. 11.”, [0035] “In some implementations, the user can touch and hold the bar chart (leave his index finger resting on the surface of the screen) to reveal a toolbar 1202… The toolbar 1202 can contain icons.”).
Lee and the combination of Chan in view of Yang and W3Schools and further in view of Golden Software are both analogous to the claimed invention because they are in the same field of generating data visualizations based on handwritten user input. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Chan in view of Yang and W3Schools and further in view of Golden Software with the teachings of Lee to combine the gestures to enable the graphing mode, generate a graphic for chart selection, and open the separate graphing interface into a single gesture to perform all three simultaneously. The motivation would have been to streamline the user experience.
Regarding claim 104, the combination of Chan in view of Yang and W3Schools and further in view of Golden Software and Lee teaches the display apparatus according to claim 103, wherein after receiving the second graphic drawn by the user on the first display interface of the writing application, the processor is further configured for:
generating a menu of candidates in the first display interface (Chan [0049] “In addition, when the first hand-painted trajectory does not have the corresponding predefined chart type, a chart menu is displayed in the display unit 120 for the user to select one chart option from the chart menu. The chart menu includes one or more chart options. For example, the chart menu include the chart options of the pie chart, the pyramid chart, the table, a bar chart, but contents of the chart options are not limited to the above.”), wherein the menu of candidates includes a first startup icon corresponding to the second graphic (Lee, user draws x-y axes to generate a bar chart, which enables the user to access a menu containing a bar chart icon: fig. 14, [0028] “the user scribes (sketches) a vertical stroke 912 and an intersecting horizontal stroke 914 (or vice versa), the intersection of which is recognized by a chart recognizer as x-y axes for a chart”, [0030] “Once the user 904 has labeled the vertical stroke 912 and the horizontal stroke 914 the interactive digital display 902 can cause a bar chart 1100 to appear, as shown in FIG. 11.”, [0035] “In some implementations, the user can touch and hold the bar chart (leave his index finger resting on the surface of the screen) to reveal a toolbar 1202… The toolbar 1202 can contain icons.”, [0036] “Other icons (shown but not designated in FIG. 12) can include icons for a vertical bar chart, a horizontal bar chart, a table, a line chart, a scatter plot, a pie chart, a map, and/or filtering data, among others.”); and
in response to a touch on the first startup icon from the user (Golden Software 0:21, selecting “Wizard” icon on the Home / New Graph toolbar), triggering the second display interface (Golden Software 0:30, Graph Wizard interface appears as an overlay on top of the main document) for a graph type (Golden Software 1:43, “Select Plot Type” interface in Graph Wizard) corresponding to the second graphic (Lee [0028]-[0036], user draws gesture for a bar chart, menu includes bar chart as an option) to be displayed in the display area.
Chan, Golden Software, and Lee are all analogous to the claimed invention because they are in the same field of generating data visualizations. Chan teaches generating a menu containing icons for multiple chart types after a user’s gesture input; Lee teaches a menu including an icon corresponding to the user’s gesture input; Golden Software teaches opening a separate graphing interface based on a user’s input, where the interface allows a user to specify a graph type. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these aspects of these three inventions to open a menu based on a user’s gesture input where the menu contains an icon corresponding to the gesture input, and selecting the icon opens a graphing interface which also contains option(s) for the chart type corresponding to the gesture input. The motivation would have been to streamline and combine multiple systems in a manner that both maximizes user convenience and customization options.
Claim(s) 106 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") as applied to claim 90 above, and further in view of Yoshizawa (US 20130082953 A1).
Regarding claim 106, the combination of Chan in view of Yang and W3Schools teaches the display apparatus according to claim 90, wherein before responding to the graph drawing command, the processor is further configured for:
determining a writing area and a graph area in the display area of the display screen, wherein the writing area is configured to receive the writing trajectory information from the user (Chan fig. 1 display unit 120 and touch unit 130, [0039] “Further, the display unit 120 and the touch unit 130 may also be integrated as a touch screen.”, [0062] “in the present embodiment, after a hand-painted trajectory (the first hand-painted trajectory) 810 of the user is received by the touch unit 130, the processing unit 110 may recognize the predefined chart type corresponding to the hand-painted trajectory 810…”), and the graph area is configured to display the graph corresponding to the writing trajectory information (Chan fig. 2, [0045] “In step S215, when the first hand-painted trajectory matches the predefined chart type, the processing unit 110 displays a chart corresponding to the predefined chart type in the display unit 120, and clears the first hand-painted trajectory. For example, when it is recognized that the first hand-painted trajectory matches the circle, the processing unit 110 displays a pie chart frame corresponding to the predefined chart type in the display unit 120, and clears the first hand-painted trajectory. As another example, when it is recognized that the first hand-painted trajectory matches the rectangle, the processing unit 110 displays a table frame corresponding to the predefined chart type in the display unit 120, and clears the first hand-painted trajectory. As yet another example, when it is recognized that the first hand-painted trajectory matches the triangle, the processing unit 110 displays a pyramid chart frame corresponding to the predefined chart type in the display unit 120, and clears the first hand-painted trajectory.”).
Chan does not explicitly teach that the writing area and graph area are separate areas; in Chan, the graph is displayed in the same area where the writing trajectory is received. Neither Yang nor W3Schools explicitly teaches this limitation.
Yoshizawa teaches a display apparatus with a separate input area and graph display area (fig.1, [0027] “In FIG. 1, on the touch-panel display module 14 of the graph function electronic calculator 10, a calculator screen G composed of an expression input area ga and a graph display area gb is displayed.”).
Yoshizawa and the combination of Chan in view of Yang and W3Schools are both analogous to the claimed invention because they are in the same field of generating data visualizations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Chan in view of Yang and W3Schools with the teachings of Yoshizawa to split the touch screen of Chan into two areas, one to receive user input and one to display the generated charts. The motivation would have been to improve the user interface, allowing a user to write freely without the written input potentially overlapping, or being overwritten by, a chart generated from that input.
Claim(s) 107 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") and further in view of Yoshizawa (US 20130082953 A1) as applied to claim 106 above, and further in view of Adcock ("How to quickly make multiple charts in excel").
Regarding claim 107, the combination of Chan in view of Yang and W3Schools and further in view of Yoshizawa teaches the display apparatus according to claim 106, but does not explicitly teach: wherein after displaying the drawn graph in the graph area, the processor is further configured for:
in response to a first partition command from the user, dividing the writing area into a first writing area and a second writing area, and dividing the graph area into a first graph area and a second graph area;
wherein the first writing area is configured to display writing trajectory information received before area division, the second writing area is configured to display writing trajectory information received in the second writing area after area division, the first graph area is configured to display a graph corresponding to writing trajectory information in the first writing area, and the second graph area is configured to display a graph corresponding to writing trajectory information in the second writing area.
Adcock teaches wherein after displaying the drawn graph in the graph area, the processor is further configured for:
in response to a first partition command from the user (0:23 duplicate chart, 1:10 reassign input data), dividing the writing area into a first writing area and a second writing area (first area: columns A and B; second area: columns A and C), and dividing the graph area into a first graph area and a second graph area (first and second graphs displayed to the right);
wherein the first writing area is configured to display writing trajectory information received before area division, the second writing area is configured to display writing trajectory information received in the second writing area after area division (both writing areas may display user input from both before and after area division), the first graph area is configured to display a graph corresponding to writing trajectory information in the first writing area, and the second graph area is configured to display a graph corresponding to writing trajectory information in the second writing area (1:22 graphs “CFC-11” and “CFC-113” correspond to respective input areas).
Adcock and the combination of Chan in view of Yang and W3Schools and further in view of Yoshizawa are both analogous to the claimed invention because they are in the same field of generating data visualizations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Chan in view of Yang and W3Schools and further in view of Yoshizawa with the teachings of Adcock to partition the input and output areas of the touch screen into two sections each, with one output area corresponding to each input area. The motivation would have been to expand the capabilities of the user interface, allowing for two charts and two sets of data input to be processed and displayed simultaneously.
Claim(s) 108 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") as applied to claim 90 above, and further in view of Golden Software ("Grapher: How to use the Graph Wizard").
Regarding claim 108, the combination of Chan in view of Yang and W3Schools teaches the display apparatus according to claim 90, wherein the processor is further configured for:
in the intelligent graphing mode, in response to a gesture command from the user, triggering to draw the graph (Chan [0045] “In step S215, when the first hand-painted trajectory matches the predefined chart type, the processing unit 110 displays a chart corresponding to the predefined chart type in the display unit 120”).
Chan in view of Yang and W3Schools teaches a gesture command that simultaneously determines the graph type and triggers drawing the graph; therefore, it does not explicitly teach: wherein the processor is further configured for:
in the intelligent graphing mode with the graph type being determined, in response to a gesture command from the user, triggering to draw the graph.
Golden Software teaches a user input to trigger drawing a graph after the graph type has already been determined (1:43, graph type is determined in “Select Plot Type” step of Graph Wizard, 5:42 initial graph preview is generated, 6:45 user clicks “Finish” to draw the final version of the graph in the main document; see included images 3, 4, and 5 if video is inaccessible).
Golden Software and the combination of Chan in view of Yang and W3Schools are both analogous to the claimed invention because they are in the same field of generating data visualizations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Chan in view of Yang and W3Schools with the teachings of Golden Software to cause a chart drawing command to open a separate user interface for inserting a chart instead of immediately inserting the chart into the main display area. The motivation would have been to add an opportunity for the user to customize the chart and specify parameters for the chart before inserting it into the main display area, as taught by Golden Software.
Claim(s) 109 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") and further in view of Golden Software ("Grapher: How to use the Graph Wizard") as applied to claim 108 above, and further in view of Lee (US 20150015504 A1).
Regarding claim 109, the combination of Chan in view of Yang and W3Schools and further in view of Golden Software teaches the display apparatus according to claim 108, but does not explicitly teach: wherein if the gesture command is a gesture command for an arrow, wherein the processor is further configured for:
determining table data corresponding to an X-axis according to a direction of the arrow in the gesture command; or determining table data corresponding to a Y-axis according to a direction of the arrow in the gesture command.
Lee teaches: wherein if the gesture command is a gesture command for an arrow ([0127] “Arrows: users can begin the charting process by drawing two arrows (each input individually) for axes… Arrow annotations 4426 are maintained by an arrow recognizer, which "listens" (or observes) for raw strokes shaped like arrows.”), wherein the processor is further configured for:
determining table data corresponding to an X-axis according to a direction of the arrow in the gesture command; or determining table data corresponding to a Y-axis according to a direction of the arrow in the gesture command ([0030] “Once the user 904 has labeled the vertical stroke 912 and the horizontal stroke 914 the interactive digital display 902 can cause a bar chart 1100 to appear, as shown in FIG. 11. In this case, the interactive digital display can recognize the vertical stroke 912, the horizontal stroke 914, and the axis labels as suggestions to be axes for a chart. The interactive digital display can automatically draw machine-generated axes, labels, and other appropriate chart designators as shown in FIG. 11.”, [0031] and [0130] describe how the processor selects table data based on the user’s axis labels).
Lee and the combination of Chan in view of Yang and W3Schools and further in view of Golden Software are both analogous to the claimed invention because they are in the same field of generating data visualizations based on handwritten user input. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Chan in view of Yang and W3Schools and further in view of Golden Software with the teachings of Lee to add an additional chart type, where the associated gesture is two arrows representing x and y axes, which is capable of loading in predetermined data. The motivation would have been to expand the functionality of the invention of Chan in view of Yang and W3Schools and further in view of Golden Software, allowing a user to easily generate more complex charts by loading in larger datasets rather than having to manually input each value.
Claim(s) 112 and 113 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20160350951 A1) in view of Yang (CN 112394859 A) and W3Schools ("HTML <input> placeholder Attribute") and further in view of Lee (US 20150015504 A1) as applied to claim 111 above, and further in view of DVC ("plots diff" documentation).
Regarding claim 112, the combination of Chan in view of Yang and W3Schools and further in view of Lee teaches the display apparatus according to claim 111, but does not explicitly teach: wherein after responding to the modification command from the user, the processor is further configured for:
determining a first content before modification corresponding to the modification command and a second content after modification corresponding to the modification command; and
displaying the first content and the second content simultaneously, wherein display styles for the first content and the second content are different.
DVC teaches: wherein after responding to the modification command from the user, the processor is further configured for:
determining a first content before modification corresponding to the modification command and a second content after modification corresponding to the modification command (‘Description’ section: “revisions are Git commit hashes, tags, or branch names. If none are specified, dvc plots diff compares plots currently present in the workspace (uncommitted changes) with their latest commit (required). A single specified revision results in comparing the workspace and that version.”, where the latest commit is the content before modification and the current workspace version with uncommitted changes is the version after modification); and
displaying the first content and the second content simultaneously, wherein display styles for the first content and the second content are different (‘Description’ section: “This command is a way to visualize the "difference" between certain metrics among versions of the repository, by overlaying them in a single plot.”, ‘Examples’ section with logs.csv graph showing AUC for both committed (unmodified) and current (modified) versions, with each displayed using a different color).
DVC and the combination of Chan in view of Yang and W3Schools and further in view of Lee are both analogous to the claimed invention because they are in the same field of generating data visualizations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Chan in view of Yang and W3Schools and further in view of Lee with the teachings of DVC to incorporate a version control system to track, store, recall, and display changes made between different versions of the data visualization system. The motivation would have been to expand the system’s capabilities, allowing a user to compare versions of an input data table as taught by DVC, or to roll back to a previous version as is common functionality for version control systems.
Regarding claim 113, the combination of Chan in view of Yang and W3Schools and further in view of Lee teaches the display apparatus according to claim 111, as well as obtaining writing trajectory information (Chan [0072] “After the table is drawn, the user may also click on an empty field to edit the selected field in the hand-writing manner. The processing unit 110 recognizes and converts contents from hand-written contents into the digitized contents, and displays the converted digitized contents in the selected field in the display unit 120.”).
The combination of Chan in view of Yang and W3Schools and further in view of Lee does not explicitly teach: wherein after obtaining and storing the writing trajectory information after modification in a first storage zone, the processor is further configured for:
obtaining the writing trajectory information after modification from the first storage zone, drawing a first graph according to the writing trajectory information before modification and the writing trajectory information after modification;
wherein the first graph comprises first table data corresponding to the writing trajectory information before modification, and second table data corresponding to the writing trajectory information after modification;
or,
obtaining the writing trajectory information after modification from the first storage zone, drawing a second graph according to the writing trajectory information before modification, and drawing a third graph according to the writing trajectory information after modification;
displaying the second graph and the third graph simultaneously.
DVC teaches: wherein after obtaining and storing the writing trajectory information after modification in a first storage zone (‘Description’ section: “revisions are Git commit hashes, tags, or branch names. If none are specified, dvc plots diff compares plots currently present in the workspace (uncommitted changes) with their latest commit (required). A single specified revision results in comparing the workspace and that version.”, where the latest commit is the content before modification and the current workspace version with uncommitted changes is the version after modification; Git can store and retrieve data from each commit), the processor is further configured for:
obtaining the writing trajectory information after modification from the first storage zone (‘Examples’ section, first example; logs.csv files for both committed (unmodified) and current (modified) versions are used as input), drawing a first graph according to the writing trajectory information before modification and the writing trajectory information after modification (‘Examples’ section with logs.csv graph showing AUC for both committed (HEAD; unmodified) and current (workspace; modified) versions);
wherein the first graph comprises first table data corresponding to the writing trajectory information before modification, and second table data corresponding to the writing trajectory information after modification (input file ‘logs.csv’ for each commit is stored in tabular CSV format);
or,
obtaining the writing trajectory information after modification from the first storage zone (Confusion matrix example section; classes.csv files for both committed (HEAD; unmodified) and current (workspace; modified) versions are used as input), drawing a second graph according to the writing trajectory information before modification (Confusion matrix example section; “HEAD” matrix (left)), and drawing a third graph according to the writing trajectory information after modification (Confusion matrix example section; “workspace” matrix (right));
displaying the second graph and the third graph simultaneously (Confusion matrix example section; “HEAD” and “workspace” confusion matrices are displayed simultaneously).
DVC and the combination of Chan in view of Yang and W3Schools and further in view of Lee are both analogous to the claimed invention because they are in the same field of generating data visualizations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Chan in view of Yang and W3Schools and further in view of Lee with the teachings of DVC to incorporate a version control system to track, store, recall, and display changes in input data made by a user across different versions, and to generate charts to visualize the changes. The motivation would have been to expand the functionality of the invention by adding a user-friendly means of comparing different data versions, whether or not multiple sets of input data are logically suitable for display on a single graph, as shown in the two different examples taught by DVC.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN STATZ whose telephone number is (571)272-6654. The examiner can normally be reached Mon-Fri 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached at (571)272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BENJAMIN TOM STATZ/ Examiner, Art Unit 2611
/TAMMY PAIGE GODDARD/ Supervisory Patent Examiner, Art Unit 2611