Prosecution Insights
Last updated: April 19, 2026
Application No. 19/212,928

DESCRIPTIVE INSIGHT GENERATION AND PRESENTATION SYSTEM

Final Rejection — §103, §DP

Filed: May 20, 2025
Examiner: BARNES JR, CARL E
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)

Grant Probability: 32% (At Risk)
OA Rounds: 3-4
To Grant: 4y 4m
With Interview: 57%

Examiner Intelligence

Career Allow Rate: 32% (65 granted / 202 resolved; -22.8% vs TC avg)
Interview Lift: +25.2% for resolved cases with an interview vs. without
Avg Prosecution: 4y 4m typical timeline; 32 applications currently pending
Career History: 234 total applications across all art units
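
For readers who want to check the arithmetic, the headline figures above appear to fit together as a simple calculation from the raw counts. The sketch below is a minimal illustration of that reading, not the product's actual model: it assumes "Grant Probability" is the examiner's career allow rate and that the "With Interview" figure adds the observed interview lift.

```python
# Minimal sketch (assumption: the dashboard's headline figures are derived
# directly from the career counts and interview lift shown above).

granted, resolved = 65, 202            # career counts from the panel above
lift_pp = 25.2                         # interview lift, percentage points
delta_vs_tc_pp = -22.8                 # allow rate minus TC average, pp

allow_rate = granted / resolved                     # ~0.322 -> 32%
with_interview = allow_rate + lift_pp / 100         # ~0.574 -> 57%
implied_tc_avg = allow_rate - delta_vs_tc_pp / 100  # ~0.550 -> 55%

print(f"career allow rate:  {allow_rate:.1%}")
print(f"with interview:     {with_interview:.1%}")
print(f"implied TC average: {implied_tc_avg:.1%}")
```

Running this reproduces the 32% and 57% shown above and implies a Tech Center average near 55%.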

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 62.6% (+22.6% vs TC avg)
§102: 9.0% (-31.0% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 202 resolved cases.
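
The per-statute deltas above are internally consistent: assuming each delta is simply the examiner's rate minus the Tech Center average estimate, every row implies the same baseline of roughly 40%. A small sketch to verify that reading:

```python
# Assumption: "vs TC avg" = examiner rate minus Tech Center average estimate.
rates_pp  = {"101": 14.3, "103": 62.6, "102": 9.0, "112": 8.7}    # examiner, %
deltas_pp = {"101": -25.7, "103": 22.6, "102": -31.0, "112": -31.3}

for statute, rate in rates_pp.items():
    implied_tc = rate - deltas_pp[statute]
    print(f"§{statute}: examiner {rate:.1f}%, implied TC average {implied_tc:.1f}%")
```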

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 21-40 were previously pending and subject to a non-final action mailed 10/27/2025. In the response filed on 12/16/2025, claims 21, 35, 39 and 40 were amended, claim 27 was canceled, and claim 41 was added. Therefore, claims 21-26 and 28-41 are currently pending and subject to the final action below.

Response to Arguments

Applicant's arguments filed 12/16/2025 with respect to claims 21-23, 26-27, 35-39 and 40 under double patenting have been fully considered, but the rejection is maintained. Applicant acknowledges the double patenting rejection and requests that the rejection be held in abeyance because no claim in the present application is currently allowable.

Applicant's arguments filed 12/16/2025 with respect to claims 21-40 under 35 U.S.C. 103 have been fully considered, but they are not persuasive.

Applicant's argument: Applicant submits that the cited references do not disclose or suggest at least the above-emphasized features of claim 21 as amended. To wit, Lecue teaches determining topics for a set of data using a knowledge graph having nodes each representing a topic and edges each representing relationships between the topics. Lecue, [0020]. Thus, while Lecue teaches identifying topics using domain knowledge graphs, Lecue does not teach comparing text elements extracted from a digital visual graph image to text of one or more areas of domain knowledge. Therefore, claim 21 is believed to be allowable over the cited references. Independent claims 35 and 40 are amended to include one or more elements that are the same as or similar to those elements amended into claim 21. Accordingly, Applicant submits that claims 35 and 40 are allowable over the cited references for reasons similar to those discussed above with respect to claim 21.

Examiner response: After careful consideration and review of Applicant's arguments, the Examiner respectfully disagrees that the amendments overcome the prior art of record, for the reasons below.

Can teaches relationships based on domain knowledge (Can − [0161-0164], Fig. 11A) but does not explicitly teach: wherein generating the recognized text elements comprises determining a domain for the digital visual graph by comparing the extracted text elements to text of one or more areas of domain knowledge. However, Lecue teaches: wherein generating the recognized text elements comprises determining a domain for the digital visual graph by comparing the extracted text elements to text of one or more areas of domain knowledge. (Lecue − [0004] determine a represented set of data for a first set of topics of a plurality of topics of the data input based on a domain knowledge graph of the plurality of topics; [0020] The domain knowledge graph may include a knowledge graph of known or recorded topics of a particular domain, with each topic being a node on the domain knowledge graph, and edges (links) between the topics corresponding to relationships between the respective topics; [0023] domain knowledge graph are shown as relevant topics associated with the data; [0031] For example, the class balance identifier may compare the topics of the data input and/or topics related to the data input (e.g., those topics within a certain edge distance of the topics of the data input) to each other.) That is, Lecue teaches comparing areas of domain knowledge to each other.
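
To make the disputed limitation concrete, here is an illustrative sketch only, not Applicant's implementation and not Lecue's method; the domain names and vocabulary below are hypothetical. It shows the kind of operation claim 21 recites: determining a domain for a graph image by comparing the text elements extracted from the image against the text of one or more areas of domain knowledge.

```python
# Illustrative sketch of the claimed "determine a domain by comparing extracted
# text elements to areas of domain knowledge" step. All names are hypothetical;
# this is not code from the application or from any cited reference.

DOMAIN_VOCABULARY = {
    "finance":    {"revenue", "profit", "quarter", "usd", "fiscal"},
    "healthcare": {"patient", "dosage", "mg", "diagnosis", "clinic"},
    "climate":    {"temperature", "celsius", "precipitation", "co2"},
}

def determine_domain(extracted_text_elements):
    """Return the domain whose vocabulary overlaps most with the extracted text."""
    tokens = {tok.lower() for element in extracted_text_elements
              for tok in element.split()}
    scores = {domain: len(tokens & vocab)
              for domain, vocab in DOMAIN_VOCABULARY.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# Example: axis labels and a title OCR'd from a bar chart image.
print(determine_domain(["Revenue by Quarter", "USD millions", "FY2024"]))  # finance
```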
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 21-23, 26-27, 35-39, and 40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 3, 9-10, and 17-18 of U.S. Patent No. 12,339,868 B2. Although the claims at issue are not identical, they are not patentably distinct from each other: claims 21-23, 26-27, 35-39, and 40 of instant application 19/212,928 are anticipated by claims 1-2, 3, 9-10, and 17-18 of Patent No. 12,339,868 B2. Claim mapping (US PAT: 12,339,868 B2 / Instant App: 19/212,928):
US PAT 12,339,868 B2, claims 1, 2, and 18 (in part):
1. A system, comprising: one or more processors; one or more memory devices that store program code to be executed by the one or more processors, the program code comprising:
2. wherein the digital visual graph to dataset converter comprises: a second machine learning model configured to identify a type of the digital visual graph; a text extractor configured to extract text elements in the digital visual graph; a text locator configured to locate each of the text elements relative to a coordinate system for the digital visual graph; a text recognizer configured to recognize the text elements; a parameter type identifier configured to identify a parameter type of each of the text elements; a visual image scanner configured to scan the digital visual graph horizontally and vertically based on the identified type of the digital visual graph and measure a magnitude and location of each of the output parameters of the digital visual graph relative to the coordinate system; and a structured dataset generator configured to generate the structured dataset by associating the text elements, the location of each of the text elements relative to the coordinate system, the parameter type of each of the text elements, and the magnitude and location of each of the output parameters relative to the coordinate system.
18. recognizing the text elements by comparing the text elements to one or more domain areas of domain knowledge

Instant App 19/212,928, claim 21:
21. (New) A system comprising: a processor; and memory comprising computer executable instructions that perform operations comprising: identifying a graph type of a digital visual graph; generating extracted text elements by extracting text elements from the digital visual graph; locating the text elements relative to a coordinate system for the digital visual graph; generating recognized text elements by executing a text recognizer on the text elements; wherein the text recognizer determines a domain for the digital visual graph by comparing the extracted text elements to text of one or more areas of domain knowledge; identifying a set of parameter types comprising a parameter type of each recognized text element of the recognized text elements; scanning the digital visual graph based on the graph type to measure a location and a magnitude of output data illustrated in the digital visual graph relative to the coordinate system; and based on scanning the digital visual graph, generating a structured dataset comprising data elements corresponding to the digital visual graph.

US PAT 12,339,868 B2, claim 3:
3. The system of claim 2, wherein the second machine learning model is trained utilizing bar graphs, line graphs, or pie charts and respective bar graph, line graph, and pie chart type identifiers.

Instant App 19/212,928, claims 22 and 23:
22. (New) The system of claim 21, wherein identifying the graph type comprises utilizing a machine learning model to identify the graph type.
23. (New) The system of claim 21, wherein the graph type is one of: a bar graph; a line graph; or a pie chart.

US PAT 12,339,868 B2, claim 18 (in part):
18. locating each of the text elements relative to a coordinate system for the digital visual graph, the coordinate system indicating pixel coordinates of the text elements; recognizing the text elements by comparing the text elements to one or more domain areas of domain knowledge

Instant App 19/212,928, claims 26 and 27:
26. (New) The system of claim 21, wherein locating the text elements relative to the coordinate system comprises identifying a location of pixels representing each of the text elements, the location of the pixels being specified by coordinates of the coordinate system.
27. (New) The system of claim 21, wherein generating the recognized text elements comprises determining a domain for the digital visual graph by comparing the extracted text elements to text of one or more areas of domain knowledge.

US PAT 12,339,868 B2, claim 10 (in part):
10. scanning the digital visual graph horizontally and vertically based on the identified type of the digital visual graph

Instant App 19/212,928, claim 31:
31. (New) The system of claim 21, wherein scanning the digital visual graph comprises scanning the digital visual graph horizontally and vertically based on the graph type of the digital visual graph.

Patent claim 2 further corresponds to instant claim 35 and to instant claim 40.

Although the claims at issue are not identical, they are not patentably distinct from each other; claims 21-23, 26-27, 35-39, and 40 of instant application 19/212,928 are anticipated by claims 1-2, 3, 9-10, and 17-18 of Patent No. 12,339,868 B2.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-26 and 28-40 are rejected under 35 U.S.C. 103 as being unpatentable over Can (USPGPUB: 20170371856) in view of Appel (USPGPUB: 20170185835), further in view of Lecue (USPGPUB: 20200050946, filed Aug. 9, 2018).

Regarding independent claim 21, Can teaches:

A system comprising: a processor; (Can − [0004] Various embodiments described herein may include an apparatus comprising a processor)

and memory comprising computer executable instructions that perform operations comprising: (Can − [0004] Various embodiments described herein may include an apparatus comprising a processor and a storage to store instructions that, when executed by the processor, may cause the processor to perform operations)

identifying a graph type of a digital visual graph; (Can − [0004] determine a set of graph-type correlation scores for the graph image, the set of graph-type correlation scores to include a graph-type correlation score for each graph type of a plurality of graph types. [0188] In one or more embodiments, PGS 1202 may identify a data visualization comprising a graph image. In one or more embodiments, PGS 1202 may evaluate the set of graph-type correlation scores to identify a graph type of the graph image. [0203] In various embodiments, PGS 1202 may initially support a set of initial graph types, such as linear graphs and bar graphs. However, in some embodiments, the flexible and modular design of PGS 1202 may support learning further graph types.)
generating extracted text elements by extracting text elements from the digital visual graph; (Can − [0005] the processor of the apparatus may be caused to perform operations comprising one or more of: detect a portion of the graph image with contextual information; extract a textual element from the portion of the graph image with contextual information; [0053] [0219] FIG. 16 illustrates an example processing flow of a context extractor. Context extractor 1258 may utilize optical character recognition (OCR) and/or computer vision. Embodiments are not limited in this context.) locating the text elements relative to a coordinate system for the digital visual graph; (Can − [0219] FIG. 16 illustrates an example of a processing flow 1600 of context extractor 1258 that may be representative of various embodiments. In one or more such embodiments, know the graph type may provide prior knowledge of the components of an image, such as the location of axes, and data labels. In various embodiments, context extractor 1258 may utilize optical character recognition (OCR) and/or computer vision. Embodiments are not limited in this context.) generating recognized text elements by executing a text recognizer on the text elements; (Can − [0053] [0219] FIG. 16 illustrates an example processing flow of a context extractor [0219] In various embodiments, context extractor 1258 may utilize optical character recognition (OCR) and/or computer vision. Embodiments are not limited in this context.) identifying a set of parameter types comprising a parameter type of each recognized text element of the recognized text elements; (Can − [0071] For example, after being processed, the unstructured time stamped data may be aggregated by time (e.g., into daily time period units) to generate time series data and/or structured hierarchically (structured dataset) according to one or more dimensions (e.g., parameters, attributes, and/or variables [0136] a collection of event objects may include one or more fields designated as primary identifiers (ID) for the event objects). scanning the digital visual graph based on the graph type to measure a location (Can − [0219] FIG. 16 illustrates an example of a processing flow 1600 of context extractor 1258 that may be representative of various embodiments. In one or more such embodiments, know the graph type may provide prior knowledge of the components of an image, such as the location of axes, and data labels. [0243] Examples of such input devices include, scanners) and based on scanning the digital visual graph, generating a structured dataset comprising data elements corresponding to the digital visual graph. (Can − [0219] FIG. 16 illustrates an example of a processing flow 1600 of context extractor 1258 that may be representative of various embodiments. In one or more such embodiments, know the graph type may provide prior knowledge of the components of an image, such as the location of axes, and data labels. In various embodiments, context extractor 1258 may utilize optical character recognition (OCR) and/or computer vision. Embodiments are not limited in this context.) Can does not explicitly teach: and a magnitude of output data illustrated in the digital visual graph relative to the coordinate system; However, Appel teaches: and a magnitude of output data illustrated in the digital visual graph relative to the coordinate system; (Appel − OCR the values in the bars by comparing the size (magnitude) of the bars in pixels. Company X and Company Y representing output parameters. 
[0054] Referring now to FIG. 6, there illustrated is a graph or chart 300 having a number of bar charts for use in an exemplary embodiment of the present invention. The system or method detects that this is a bar chart, for example, by making use of an image classification algorithm. The system or method then interprets the information in the chart, such as the meaning of the x and y axis and the legend using, e.g., OCR, and the corresponding values in the bars by comparing the size of the bars in pixels with the values in the y axis, and extracts the amount of profit of companies X and Y over three consecutive years (i.e., from 2012 through 2014).) Accordingly, it would have been obvious to one of ordinary skill in the art before effective filing date of the claim invention to have combined the teaching of Can, and Appel as all invention relate to data analysis of graph images and mapping graph image data into structure data. Adding the teaching of Appel provides Can with image classification algorithm for determining corresponding values within bars by determining pixel positions. One would have been motivated to make such combination to improve textual summarization of data visualization to be communicated to a visually impaired person, as taught by Can. Can teaches: relationships based on domain knowledge (Can − [0161-0164] Fig. 11A) but does not explicitly teach: wherein generating the recognized text elements comprises determining a domain for the digital visual graph by comparing the extracted text elements to text of one or more areas of domain knowledge. However, Lecue teaches: : wherein generating the recognized text elements comprises determining a domain for the digital visual graph by comparing the extracted text elements to text of one or more areas of domain knowledge. (Lecue − [0004] determine a represented set of data for a first set of topics of a plurality of topics of the data input based on a domain knowledge graph of the plurality of topics [0020] The domain knowledge graph may include a knowledge graph of known or recorded topics of a particular domain, with each topic being a node on the domain knowledge graph, and edges (links) between the topics corresponding to relationships between the respective topics. [0023] domain knowledge graph are shown as relevant topics associated with the data. [0031] For example, the class balance identifier may compare the topics of the data input and/or topics related to the data input (e.g., those topics within a certain edge distance of the topics of the data input) to each other.) Accordingly, it would have been obvious to one of ordinary skill in the art before effective filing date of the claim invention to have combined the teaching of Can, Appel and Lecue as all invention relate to data analysis of graph images and mapping graph image data into structure data. Adding the teaching of Lecue provides Can a semantic analyzer to obtain a domain knowledge graph to identify topics associate with structure data. One would have been motivated to make such combination to improve textual summarization of data visualization to be communicated to a visually impaired person, as taught by Can. Regarding dependent claim 22, depends on claim 21, Can teaches: wherein identifying the graph type comprises utilizing a machine learning model to identify the graph type. 
(Can − [0004] − determine a set of graph-type correlation scores for the graph image, the set of graph-type correlation scores to include a graph-type correlation score for each graph type of a plurality of graph types. [0159] Different machine-learning models may be used interchangeably to perform a task. [0163] machine-learning model is trained using the training data. The desired output may be a scalar, a vector, or a different type of data structure such as text or an image. [0188] In one or more embodiments, PGS 1202 may identify a data visualization comprising a graph image. In one or more embodiments, PGS 1202 may evaluate the set of graph-type correlation scores to identify a graph type of the graph image. [0203] In various embodiments, PGS 1202 may initially support a set of initial graph types, such as linear graphs and bar graphs. However, in some embodiments, the flexible and modular design of PGS 1202 may support learning further graph types.) Regarding dependent claim 23, depends on claim 21, Can teaches: wherein the graph type is one of: a bar graph; a line graph; or a pie chart. (Can − [0004] − determine a set of graph-type correlation scores for the graph image, the set of graph-type correlation scores to include a graph-type correlation score for each graph type of a plurality of graph types. [0159] Different machine-learning models may be used interchangeably to perform a task. [0163] [0188] [0203] In various embodiments, PGS 1202 may initially support a set of initial graph types, such as linear graphs and bar graphs. However, in some embodiments, the flexible and modular design of PGS 1202 may support learning further graph types.) Regarding dependent claim 24, depends on claim 21, Can teaches: wherein the graph type indicates a particular number of input and a particular number of outputs for the digital visual graph. (Can – [0071] data may be stored in a hierarchical data structure, such as tabular form [0004-0005] detect a portion of the graph image with contextual information; extract a textual element from the portion of the graph image with contextual information; and insert at least a portion of the textual element extracted from the portion of the graph image with contextual information into at least one text template of the one or more text templates to generate the textual description of the graph image. [0197] . As shown in FIG. 13C, original image 1322 of graph image. [0071] The unstructured data (data visualization i.e. graph) may be presented to the computing environment 114 in different forms such as a flat file or a conglomerate of data records, and may have data values and accompanying time stamps. The computing environment 114 may be used to analyze the unstructured data in a variety of ways to determine the best way to structure (e.g., hierarchically) that data, such that the structured data is tailored to a type of further analysis that a user wishes to perform on the data.) Regarding dependent claim 25, depends on claim 21, Can teaches: wherein extracting the text elements comprises detecting at least one of: x-axis text of the digital visual graph; y-axis text of the digital visual graph; header text of the digital visual graph; or footer text of the digital visual graph. 
(Can − [0004-0005] detect a portion of the graph image with contextual information; extract a textual element from the portion of the graph image with contextual information; and insert at least a portion of the textual element extracted from the portion of the graph image with contextual information into at least one text template of the one or more text templates to generate the textual description of the graph image. [0071] [0197] . As shown in FIG. 13C, original image 1322 of graph image. As shown in FIG. 13C, x axis of graph data are input attributes and y axis of graph data are output attributes) Regarding dependent claim 26, depends on claim 21, Can does not explicitly teach: identifying a location of pixels representing each of the text elements, the location of the pixels being specified by coordinates of the coordinate system However, Appel teaches: wherein locating the text elements relative to the coordinate system comprises identifying a location of pixels representing each of the text elements, the location of the pixels being specified by coordinates of the coordinate system. (Appel − OCR the values in the bars by comparing the size (magnitude) of the bars in pixels. Company X and Company Y representing output parameters. [0054] Referring now to FIG. 6, there illustrated is a graph or chart 300 having a number of bar charts for use in an exemplary embodiment of the present invention. The system or method detects that this is a bar chart, for example, by making use of an image classification algorithm. The system or method then interprets the information in the chart, such as the meaning of the x and y axis and the legend using, e.g., OCR, and the corresponding values in the bars by comparing the size of the bars in pixels with the values in the y axis, and extracts the amount of profit of companies X and Y over three consecutive years (i.e., from 2012 through 2014).) Accordingly, it would have been obvious to one of ordinary skill in the art before effective filing date of the claim invention to have combined the teaching of Can, Appel and Lecue as all invention relate to data analysis of graph images and mapping graph image data into structure data. Adding the teaching of Lecue provides Can a semantic analyzer to obtain a domain knowledge graph to identify topics associate with structure data. One would have been motivated to make such combination to improve textual summarization of data visualization to be communicated to a visually impaired person, as taught by Can. Regarding dependent claim 28, depends on claim 21, Can teaches: wherein identifying the set of parameter types comprises determining whether each extracted text element of the extracted text elements represents a known value type. (Can − [0005] the processor of the apparatus may be caused to perform operations comprising one or more of: detect a portion of the graph image with contextual information; extract a textual element from the portion of the graph image with contextual information; [0053] [0219] FIG. 16 illustrates an example processing flow of a context extractor. Context extractor 1258 may utilize optical character recognition (OCR) and/or computer vision. Embodiments are not limited in this context.) Regarding dependent claim 29, depends on claim 21, Can teaches: wherein the known value type comprises: a date; a time; or a count. 
(Can − [0071] For example, after being processed, the unstructured time stamped data may be aggregated by time (e.g., into daily time period units) to generate time series data and/or structured hierarchically (structured dataset) according to one or more dimensions (e.g., parameters, attributes, and/or variables [0136] a collection of event objects may include one or more fields designated as primary identifiers (ID) for the event objects). Regarding dependent claim 30, depends on claim 21, Can teaches: wherein the known value type comprises: a temperature; an x-axis variable; or a y-axis variable. (Can − [0004-0005] detect a portion of the graph image with contextual information; extract a textual element from the portion of the graph image with contextual information; and insert at least a portion of the textual element extracted from the portion of the graph image with contextual information into at least one text template of the one or more text templates to generate the textual description of the graph image. [0071] [0197] . As shown in FIG. 13C, original image 1322 of graph image. As shown in FIG. 13C, x axis of .graph data are input attributes and y axis of graph data are output attributes) Regarding dependent claim 31, depends on claim 21, Can teaches: wherein scanning the digital visual graph comprises scanning the digital visual graph horizontally and vertically based on the graph type of the digital visual graph. (Can − [0219] FIG. 16 illustrates an example of a processing flow 1600 of context extractor 1258 that may be representative of various embodiments. In one or more such embodiments, know the graph type may provide prior knowledge of the components of an image, such as the location of axes, and data labels. [0243] Examples of such input devices include, scanners) Regarding dependent claim 32, depends on claim 21, Can does not explicitly teach: detecting pixel coordinates of input parameters in the output data, However, Appel teaches: wherein measuring a location of the output data comprises detecting pixel coordinates of input parameters in the output data, the input parameters representing x-axis data of the digital visual graph. (Appel − OCR the values in the bars by comparing the size (magnitude) of the bars in pixels. Company X and Company Y representing output parameters. [0054] Referring now to FIG. 6, there illustrated is a graph or chart 300 having a number of bar charts for use in an exemplary embodiment of the present invention. The system or method detects that this is a bar chart, for example, by making use of an image classification algorithm. The system or method then interprets the information in the chart, such as the meaning of the x and y axis and the legend using, e.g., OCR, and the corresponding values in the bars by comparing the size of the bars in pixels with the values in the y axis, and extracts the amount of profit of companies X and Y over three consecutive years (i.e., from 2012 through 2014).) Accordingly, it would have been obvious to one of ordinary skill in the art before effective filing date of the claim invention to have combined the teaching of Can, Appel and Lecue as all invention relate to data analysis of graph images and mapping graph image data into structure data. Adding the teaching of Lecue provides Can a semantic analyzer to obtain a domain knowledge graph to identify topics associate with structure data. 
One would have been motivated to make such combination to improve textual summarization of data visualization to be communicated to a visually impaired person, as taught by Can. Regarding dependent claim 33, depends on claim 21, Can does not explicitly teach: detecting pixel coordinates of input parameters in the output data, However, Appel teaches: wherein measuring a magnitude of the output data comprises detecting pixel coordinates of output parameters in the output data, the output parameters representing y-axis data of the digital visual graph. (Appel − OCR the values in the bars by comparing the size (magnitude) of the bars in pixels. Company X and Company Y representing output parameters. [0054] Referring now to FIG. 6, there illustrated is a graph or chart 300 having a number of bar charts for use in an exemplary embodiment of the present invention. The system or method detects that this is a bar chart, for example, by making use of an image classification algorithm. The system or method then interprets the information in the chart, such as the meaning of the x and y axis and the legend using, e.g., OCR, and the corresponding values in the bars by comparing the size of the bars in pixels with the values in the y axis, and extracts the amount of profit of companies X and Y over three consecutive years (i.e., from 2012 through 2014).) Accordingly, it would have been obvious to one of ordinary skill in the art before effective filing date of the claim invention to have combined the teaching of Can, Appel and Lecue as all invention relate to data analysis of graph images and mapping graph image data into structure data. Adding the teaching of Lecue provides Can a semantic analyzer to obtain a domain knowledge graph to identify topics associate with structure data. One would have been motivated to make such combination to improve textual summarization of data visualization to be communicated to a visually impaired person, as taught by Can. Regarding dependent claim 34, depends on claim 21, Can teaches: wherein generating a structured dataset comprises associating: the recognized text elements; (Can − [0219] FIG. 16 illustrates an example of a processing flow 1600 of context extractor 1258 that may be representative of various embodiments. In one or more such embodiments, know the graph type may provide prior knowledge of the components of an image, such as the location of axes, and data labels. In various embodiments, context extractor 1258 may utilize optical character recognition (OCR) and/or computer vision. Embodiments are not limited in this context.) the parameter type of each of the recognized text elements; the location of the output data relative to the coordinate system; (Can − [0219] FIG. 16 illustrates an example of a processing flow 1600 of context extractor 1258 that may be representative of various embodiments. In one or more such embodiments, know the graph type may provide prior knowledge of the components of an image, such as the location of axes, and data labels. In various embodiments, context extractor 1258 may utilize optical character recognition (OCR) and/or computer vision. Embodiments are not limited in this context. [0222] In a further example, a graph image with revenue on the y-axis and year on the x-axis may be received for summarization. In such examples, context extractor 1258 may identify that the values on the y-axis may include a dollar sign (e.g., `S`) and the values on the x-axis may be four digit numbers (e.g., 2014, 2015, 2016). 
Based on this information, context extractor 1258 may determine that the y-axis represents some type of resource information and the x-axis represents the year. In various embodiments, context extractor 1258 may identify a title of `Revenue v. Year` in the graph image) Can does not explicitly teach: pixel locations of each of the recognized text elements relative to the coordinate system; and the magnitude of the output data relative to the coordinate system. However, Appel teaches: pixel locations of each of the recognized text elements relative to the coordinate system; and the magnitude of the output data relative to the coordinate system. (Appel − OCR the values in the bars by comparing the size (magnitude) of the bars in pixels. Company X and Company Y representing output parameters. [0054] Referring now to FIG. 6, there illustrated is a graph or chart 300 having a number of bar charts for use in an exemplary embodiment of the present invention. The system or method detects that this is a bar chart, for example, by making use of an image classification algorithm. The system or method then interprets the information in the chart, such as the meaning of the x and y axis and the legend using, e.g., OCR, and the corresponding values in the bars by comparing the size of the bars in pixels with the values in the y axis, and extracts the amount of profit of companies X and Y over three consecutive years (i.e., from 2012 through 2014).) Accordingly, it would have been obvious to one of ordinary skill in the art before effective filing date of the claim invention to have combined the teaching of Can, Appel and Lecue as all invention relate to data analysis of graph images and mapping graph image data into structure data. Adding the teaching of Lecue provides Can a semantic analyzer to obtain a domain knowledge graph to identify topics associate with structure data. One would have been motivated to make such combination to improve textual summarization of data visualization to be communicated to a visually impaired person, as taught by Can. Regarding independent claim 35, is directed to a method. Claim 35 have similar/same technical features/limitations as claim 21 and the claims are rejected under the same rationale. Regarding dependent claim 36, depends on claim 35, Can teaches: further comprising: providing the structured dataset to an insight generator component of a computing device; and generating descriptive insights for the structured dataset. (Can – [0024] the one or more text templates to include at least one portion of text associated with each detected pattern of the one or more detected patterns, each text template of the one or more text templates associated with a priority level; arrange the one or more text templates in an order based on the priority level associated with each text template to generate a textual description of the graph image; and generate a personalized summary of the graph image, the summary of the graph image comprising the graph image and the textual description of the graph image. [0060] In some embodiments, a personalized computer-generated narrative can be automatically generated for one or more data visualizations.) Regarding dependent claim 37, depends on claim 36, Can teaches: generating an automated descriptive insight report for the structured dataset based on the descriptive insights; and providing the automated descriptive insight report for display via an interface of the computing device. 
(Can – [0024] [0060] In some embodiments, a personalized computer-generated narrative can be automatically generated for one or more data visualizations. [0158] FIG. 11A is a flow chart of an example of a process for generating and using a machine-learning model according to some aspects. machine-learning models… cluster input data among two or more groups; predict a result based on input data; identify patterns or trends in input data; identify a distribution of input data in a space; or any combination of these. Examples of machine-learning models can include (i) neural networks… and (vi) ensembles or other combinations of machine-learning models. [0225] accordingly, in various embodiments, PGS 1202 may enable a to tailor the generated natural-text summary (e.g., personalized summary 1204) based on one or more user preferences. In one or more embodiments, the personalization capability for the ordering of text templates may be the same or similar to content-based filtering, such as in recommender systems.) Regarding dependent claim 38, depends on claim 35, Can teaches: wherein identifying the graph type comprises: providing the digital visual graph to a graph type identifier component comprising a machine learning model trained with known graph labels to identify types of digital visual graphs. (Can − [0004] − determine a set of graph-type correlation scores for the graph image, the set of graph-type correlation scores to include a graph-type correlation score for each graph type of a plurality of graph types. [0159] Different machine-learning models may be used interchangeably to perform a task. [0163] machine-learning model is trained using the training data. The desired output may be a scalar, a vector, or a different type of data structure such as text or an image. [0188] In one or more embodiments, PGS 1202 may identify a data visualization comprising a graph image. In one or more embodiments, PGS 1202 may evaluate the set of graph-type correlation scores to identify a graph type of the graph image. [0203] In various embodiments, PGS 1202 may initially support a set of initial graph types, such as linear graphs and bar graphs. However, in some embodiments, the flexible and modular design of PGS 1202 may support learning further graph types.) Regarding dependent claim 39, depends on claim 35, Can teaches: wherein identifying the parameter type of one or more of the text elements comprises (Can − [0071] For example, after being processed, the unstructured time stamped data may be aggregated by time (e.g., into daily time period units) to generate time series data and/or structured hierarchically (structured dataset) according to one or more dimensions (e.g., parameters, attributes, and/or variables [0136] a collection of event objects may include one or more fields designated as primary identifiers (ID) for the event objects) using domain knowledge associated with the output data to determine parameter attributes of the one or more text elements. (Can – [0161-0164] Fig. 11A Machine-learning models can be constructed through an at least partially automated (e.g., with little or no human involvement) process called training. During training, input data can be iteratively supplied to a machine-learning model to enable the machine-learning model to identify patterns related to the input data or to identify relationships between the input data and output data. In block 1108, the machine-learning model is evaluated. 
For example, an evaluation dataset can be obtained, for example, via user input or from a database. The evaluation dataset can include inputs correlated to desired outputs. [0216] Patterns obtained by the machine-learning models may be associated with an insight message and one more text templates.) Regarding independent claim 40, is directed to a device. Claim 40 have similar/same technical features/limitations as claim 21 and the claims are rejected under the same rationale. Claim(s) 41 is rejected under 35 U.S.C. 103 as being unpatentable over Can and Appel as applied to claim 40 above, and further in view of Ellis (USPGPUB: 20180131803 A1, Filed Date Apr. 13, 2017). Regarding dependent claim 41, depends on claim 40, Can does not explicitly teach: determines an importance of the text elements and an evaluation of an outcome indicated by the output data Ellis teaches: wherein the parameter types are identified based on the domain knowledge; and wherein the domain knowledge determines an importance of the text elements and an evaluation of an outcome indicated by the output data. (Ellis − [0037] The ordering or ranking of each insight object can be established based on a relevance score or relevance level to the user or target dataset. A relevance level might be determined using the aforementioned insight knowledge so that key insights relevant to the user are ranked higher according to an associated score than other insights. The insight knowledge applies data analysis preferences to determine what data analysis activities and insight objects or visualizations are likely to be important to a user or organization.) Accordingly, it would have been obvious to one of ordinary skill in the art before effective filing date of the claim invention to have combined the teaching of Can, Appel Lecue and Ellis as all invention relate to data analysis of graph images and mapping graph image data into structure data. Adding the teaching of Ellis provides Can a generating knowledge graph and identifying trends in dataset.. One would have been motivated to make such combination to improve textual summarization of data visualization to be communicated to a visually impaired person, as taught by Can. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARL E BARNES JR whose telephone number is (571)270-3395. The examiner can normally be reached Monday-Friday 9am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CARL E BARNES JR/
Examiner, Art Unit 2178

/STEPHEN S HONG/
Supervisory Patent Examiner, Art Unit 2178
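
As technical background for the measurement limitation that runs through the §103 analysis above ("scanning the digital visual graph ... to measure a location and a magnitude of output data ... relative to the coordinate system"), the sketch below illustrates the pixel-to-value arithmetic the rejection repeatedly cites from Appel [0054], where bar sizes in pixels are compared with the y-axis values. It is an illustration only, not code from Can, Appel, or the application; all coordinates and names are hypothetical.

```python
# Illustrative sketch: recovering a bar's magnitude from pixel coordinates
# using a calibrated y-axis. Not code from any cited reference or the
# application; the tick positions and values below are hypothetical.

from dataclasses import dataclass

@dataclass
class AxisCalibration:
    """Two known y-axis ticks, each as a (pixel_row, data_value) pair."""
    tick_a: tuple   # e.g. pixel row of the "0" gridline
    tick_b: tuple   # e.g. pixel row of the "100" gridline

    def value_at(self, pixel_row):
        (pa, va), (pb, vb) = self.tick_a, self.tick_b
        return va + (pixel_row - pa) * (vb - va) / (pb - pa)

def measure_bar(calib, bar_top_row, baseline_row):
    """Magnitude of one bar: data value at its top minus value at the baseline."""
    return calib.value_at(bar_top_row) - calib.value_at(baseline_row)

# Example: data values grow upward while pixel rows grow downward in the image.
calib = AxisCalibration(tick_a=(400, 0.0), tick_b=(100, 100.0))
print(measure_bar(calib, bar_top_row=250, baseline_row=400))  # 50.0
```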

Prosecution Timeline

May 20, 2025: Application Filed
Oct 17, 2025: Non-Final Rejection — §103, §DP
Dec 09, 2025: Interview Requested
Dec 16, 2025: Response Filed
Dec 16, 2025: Applicant Interview (Telephonic)
Dec 17, 2025: Examiner Interview Summary
Feb 21, 2026: Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12584932: SLIDE IMAGING APPARATUS AND A METHOD FOR IMAGING A SLIDE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12541640: COMPUTING DEVICE FOR MULTIPLE CELL LINKING
Granted Feb 03, 2026 (2y 5m to grant)

Patent 12536464: SYSTEM FOR CONSTRUCTING EFFECTIVE MACHINE-LEARNING PIPELINES WITH OPTIMIZED OUTCOMES
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12530765: SYSTEMS AND METHODS FOR CALCIUM-FREE COMPUTED TOMOGRAPHY ANGIOGRAPHY
Granted Jan 20, 2026 (2y 5m to grant)

Patent 12530523: METHOD, APPARATUS, SYSTEM, AND COMPUTER PROGRAM FOR CORRECTING TABLE COORDINATE INFORMATION
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 32%
With Interview: 57% (+25.2%)
Median Time to Grant: 4y 4m
PTA Risk: Moderate

Based on 202 resolved cases by this examiner. Grant probability derived from career allow rate.
