Prosecution Insights
Last updated: April 19, 2026
Application No. 18/341,262

CLOSED-LOOP GENERATION OF INSIGHTS FROM SOURCE DATA

Status: Non-Final OA (§103)
Filed: Jun 26, 2023
Examiner: MERCADO, GABRIEL S
Art Unit: 2171
Tech Center: 2100 — Computer Architecture & Software
Assignee: QlikTech International AB
OA Round: 3 (Non-Final)

Grant Probability: 42% (Moderate)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 1m
Grant Probability with Interview: 69%

Examiner Intelligence

Grants 42% of resolved cases, with a strong +26.4% interview lift.

Career Allow Rate: 42% (84 granted / 198 resolved; -12.6% vs TC avg)
Interview Lift: +26.4% (among resolved cases with an interview)
Avg Prosecution: 3y 1m (43 applications currently pending)
Total Applications: 241 (across all art units)

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§103: 47.2% (+7.2% vs TC avg)
§102: 11.6% (-28.4% vs TC avg)
§112: 23.3% (-16.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 198 resolved cases.
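One way to sanity-check the panel above: each "vs TC avg" delta is the examiner's per-statute rate minus the estimated Tech Center baseline, so the baseline can be recovered by subtraction. A minimal sketch (Python; the variable names are illustrative, not from the dashboard):

```python
# Examiner's statute-specific rates and reported deltas (percentage points),
# copied from the panel above.
rates  = {"101": 12.7, "103": 47.2, "102": 11.6, "112": 23.3}
deltas = {"101": -27.3, "103": 7.2, "102": -28.4, "112": -16.7}

# Implied Tech Center baseline per statute: rate - delta.
for statute in rates:
    baseline = rates[statute] - deltas[statute]
    print(f"§{statute}: implied TC avg ~ {baseline:.1f}%")
```

Every statute implies the same ~40.0% baseline, which suggests the dashboard compares each statute-specific rate against a single Tech-Center-wide average rather than per-statute averages.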

Office Action

Non-Final Rejection — §103 (Feb 19, 2026)
DETAILED ACTION

This office action is responsive to communication(s) filed on 2/6/2026.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/6/2026 has been entered.

Claims Status

Claims 1-20 are pending and are currently being examined. Claims 1 and 11 are independent and are newly amended.

Claim Interpretation

Herein, "derivative data" is interpreted as any data that results from an operation/calculation performed on source (or "original") data satisfying a query, ¶ 128 (Instant Specification, as published).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 8-14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pogrebtsov, Alexei et al. (US 20180349456 A1; hereinafter Pogrebtsov) in view of Sanossian, Hermineh (US 20210350068 A1).

Independent Claim 1: Pogrebtsov teaches:

A method comprising: receiving, via a user interface (UI) output by a computing device, a query comprising at least one selection of at least one UI element of a plurality of UI elements (a method includes selecting fields [at least one selection of at least one UI element of a plurality of UI elements], Abstract and ¶ 51; the selection(s) are made on a displayed user interface [via a user interface (UI) output by a computing device], as shown in fig. 1B, and the database is "queried" based on the selections, ¶¶ 70 and 125) associated with source data for an entity; (the fields are associated with data for records/tables [an entity], ¶ 56)

determining, based on the query and the source data, a metric associated with the entity; (a similarity score [metric] is determined between the records [associated with the entity] based on the selection(s), Abstract and ¶¶ 61 and 64-65)

generating, based on the metric, a first UI object, wherein the first UI object is indicative of the metric based on the source data; (graphical object(s) [a first UI object] are generated based on the metric through adjustments [based on the metric … wherein the first UI object is indicative of the metric based on the source data], Abstract and ¶ 136)

causing, based on imaging data and imaging metadata, the first UI object to be output at the UI, (the objects are displayed [output at the UI], e.g., as particles 122, Abstract and ¶ 51 and figs. 1B-1H.
The display of the UI [output] is associated with imaging data at least in that the data is displayed in the GUI, figs. 1B-1H. The display is also associated with imaging metadata, e.g., color of the particles, ¶ 57 and fig. 1E, and distances between the particles, ¶¶ 52-53, 55 and 60 and figs. 1B-1C)

wherein the output of the first UI object comprises a visual representation that is based on a layout associated with the imaging data and the imaging metadata, wherein the visual representation comprises a UI object based on the source data and another UI object based on derivative data, (discovery of relationships [derivative data] between data sets and/or between records of a data set [source data], ¶ 49; particles are arranged [based on a layout associated with imaging data] based on the selected fields and the discovered relationships [derivative data], as represented to a user by the proximity of the objects to one another, ¶¶ 50-51. The display is also associated with imaging metadata, e.g., color of the particles, ¶ 57 and fig. 1E, and distances between the particles, ¶¶ 52-53, 55 and 60 and figs. 1B-1C. Particles [a UI object based on the source data] can be grouped as a result of analyzing the similarities between data [source data], Abstract and ¶¶ 59-60, and borders [another UI object based on derived data] are overlaid onto the canvas to provide a visual indication of the separation between the groups of particles, ¶ 58 and fig. 1I. Displayed borders are considered interface objects based on the similarity analysis [derived data] because they act as visual containers, or "dividers," that utilize the Gestalt principle of enclosure to explicitly separate, organize, and create a clear, actionable boundary between distinct groups of particles, and a person having ordinary skill in the art would have understood these borders as such objects.)

[…] and wherein the layout includes an arrangement of areas corresponding to respective UI objects included in the visual representation, each area within the arrangement of areas having a placement within the layout; (borders overlaid onto the canvas to provide a visual indication of the separation between groups of particles [arrangement of areas], ¶ 58 and fig. 1I)

receiving, via the computing device, input data associated with the entity, wherein at least a portion of the imaging data is based on the input data, (computations from an external engine such as 114 to generate layouts are indicative of receiving and processing input data [e.g., user field selection(s)] associated with an entity, Abstract, ¶¶ 92-97 and fig. 1A. This input data is based on the entity and its type, e.g., data element type, ¶ 123)

wherein the input data is indicative of a type of entity consistent with the entity, (the input fields are of different types (such as nominal, ordinal, or quantitative), ¶ 56; fields can be numeric, categorical, date, time, geolocation, combinations thereof, and the like, ¶ 61)

and wherein at least one portion of the input data differs from the source data; (input may include a request for external processing [input data differs from the source data], ¶ 115)

and causing … [a] second UI object to be output at the UI. (the map displays multiple objects, e.g., multiple particles 122, ¶ 51 and fig. 1B.
Furthermore, a user can change the set of selected fields and see the changes [second UI object] in real time in the relative positions of the particles, ¶ 66)

Pogrebtsov further suggests: wherein the derivative data comprises a subset of the source data satisfying the query, (in one or more embodiments, Pogrebtsov suggests this limitation by describing that display objects can represent numeric fields, ¶ 137; a determination of a maximum or a minimum value in the source data [derived data, as a subset of source data] is mentioned, ¶ 105; one or more characteristics of the one or more records can be displayed, ¶¶ 93 and 138, e.g., in multiple different data display areas for graphical objects representing data analysis, ¶ 126, e.g., in histograms [derived data] displayed together with the particles [based on source data], ¶ 58 and fig. 1H. Under the broadest reasonable interpretation, the Instant Specification's example of operating on the source data to find an extremum [e.g., maximum or minimum values], ¶ 128 (as published), reflects an example of derivative data comprising a subset of the source data satisfying the query.)

Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method of Pogrebtsov to include wherein the derivative data comprises a subset of the source data satisfying the query, as suggested by one or more embodiments of Pogrebtsov. One would have been motivated to make such a combination in order to provide the ability to display the results of any calculation/determination known in the art, including a determination of a maximum or minimum value of the data set, as reflected in Pogrebtsov ¶¶ 58, 93, 105, 126, 128, 137, 138 and fig. 1H.

As mentioned above, Pogrebtsov teaches external engine processing based on inputs, Abstract, ¶¶ 92-97 and 123, and fig. 1A, including input data that differs from the source data, ¶ 115. Pogrebtsov further describes that "the methods and systems can employ Artificial Intelligence techniques such as machine learning", ¶ 154, but falls short of expressly teaching the following, which Sanossian teaches/suggests:

generating, via a machine-learning model, based on the imaging data and the imaging metadata, and based on the input data, a second UI object, (a digital visual graph converter utilizes machine learning to identify graph types and employs modules to extract, locate, and recognize text elements and parameters; a visual scanner measures data magnitudes, enabling a generator to create a structured dataset by associating text, locations, and data points within a coordinate system, ¶ 6. Here, the imaging data includes text elements extracted from the visual graph and the magnitude of output data illustrated within the graph, and the imaging metadata encompasses the type of the digital visual graph, the location of text elements within a coordinate system, the parameter type of each text element, and the location of output data relative to that coordinate system. The machine-learning model is used to generate a descriptive insight report including one or more elements [a second UI object] based on this information, and the insight report is displayed to the user, ¶¶ 67, 94, 96, 98 and 100. The insight report is based on templates selected based on different scenarios and parameters, ¶ 61, which is necessarily reflected in the parameters/features of the graph's image.
The machine-learning model can be the digital visual graph to dataset converter 106 described in ¶ 6 and fig. 2A, which includes the visual image scanner and generator discussed above)

wherein the second UI object is indicative of the metric based on the input data, (the insight report will reflect graph metrics retrieved based on the input data, e.g., graph images, ¶¶ 6, 67, 94, 96, 98 and 100)

and wherein the second UI object differs from the first UI object based on the at least one portion of the input data that differs from the source data (based on analysis of the original graph, an insight report is generated [second UI object], which is different from the first UI object, e.g., the insight report is in a form suited for perception by a visually impaired user, Abstract and ¶ 5. In applying this to Pogrebtsov, the machine-learning model can be the external engine, and the insight report can be generated based on input indicating a request for external input [based on the at least one portion of the input data that differs from the source data], as explained above for Pogrebtsov)

wherein the machine-learning model is trained based on a group of annotated image assets, (a machine learning model is trained on images and feature vectors for the images, ¶ 99. The feature vectors and corresponding images are considered annotated image assets because the feature set extractor associates raw images of descriptive insight reports with specific, predefined labels to create structured, labeled training data used to train the machine learning model)

each associated with a placement and a prominence for one or more graphical objects in that annotated image asset; (the vectors are of images and corresponding labels, ¶ 99, and are images of graphs, such as bar and line graphs, ¶ 120. The graphs/images are each associated with a placement and a prominence for one or more graphical objects in that annotated image asset because, based on the images of the graphs, an insight generator identifies, evaluates, and prioritizes data patterns (such as trends, anomalies, or peaks) in a structured dataset, which directly informs the placement and prominence of graphical objects by determining which elements are highlighted as important or favorable in the final report, ¶ 36. The same is done by identifying positional and magnitude information from the graphs to identify relationships between the elements, ¶ 60, including magnitude [prominence] and location [placement], ¶ 49, e.g., the height and position of bars in a bar chart, ¶ 39.)

determining, based on the layout for the first UI object, a prominence for the second UI object within the layout and a placement for the second UI object within the layout, and that the output is "based on the prominence and placement for the second UI object" (the machine learning model generates a more concise user interface to display important insights, ¶ 67.
Here, the insights are generated [output] by prioritizing data patterns (such as trends, anomalies, or peaks) in a structured dataset, which directly informs the placement and prominence of graphical objects by determining which elements are highlighted as important or favorable in the final report, ¶ 36)

Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the method of Pogrebtsov to include generating, via a machine-learning model, based on the imaging data and the imaging metadata, and based on the input data, a second UI object, wherein the second UI object is indicative of the metric based on the input data, and wherein the second UI object differs from the first UI object based on the at least one portion of the input data that differs from the source data, wherein the machine-learning model is trained based on a group of annotated image assets, each associated with a placement and a prominence for one or more graphical objects in that annotated image asset; determining, based on the layout for the first UI object, a prominence for the second UI object within the layout and a placement for the second UI object within the layout, and that the output is "based on the prominence and placement for the second UI object", as taught/suggested by Sanossian. One would have been motivated to make such a combination in order to improve the usefulness and functionalities of the method to include automatically generating and rendering insights from visual graphs, without requiring the user to visually analyze the original graphs, and by producing insights in a form suited for perception by visually impaired users, Sanossian ¶¶ 3 and 30.

Claim 2: The rejection of claim 1 is incorporated. Pogrebtsov further teaches: wherein the source data comprises a plurality of fields, (data in the table [source data] are comprised of multiple columns [a plurality of fields], ¶ 73 and fig. 2) and wherein the at least one UI element is associated with at least one field of the plurality of fields. (selecting fields [at least one UI element … associated with at least one field of the plurality of fields], Abstract and ¶ 51 and fig. 1B)

Claim 3: The rejection of claim 2 is incorporated. Pogrebtsov further teaches: wherein the metric comprises an aggregation of data records within the source data based on the at least one field. (the objects are displayed based on similarity score, ¶ 4, and the objects are displayed based on aggregation of objects, ¶¶ 91 and 109, for selected fields, ¶ 62)

Claim 4: The rejection of claim 1 is incorporated. Pogrebtsov further teaches: wherein the at least one portion of the input data that differs from the source data is associated with the at least one field and is not present in the source data. (the input data that differs from the source data is associated with the at least one field because the external engine will provide feedback data based on the field selections, and the "external" input selection is not part of the selected fields, ¶ 133 and fig. 10)

Claim 8: The rejection of claim 1 is incorporated. Pogrebtsov further teaches: wherein causing the first UI object to be output at the UI comprises determining the layout for the first UI object.
(Objects, e.g., particles, are placed in the visualization with a specific size/color [layout], ¶¶ 57 and 59, and based on relative positions to other particles, Pogrebtsov ¶ 66, e.g., based on similarity of scores, Pogrebtsov ¶ 139)

Claim 9: The rejection of claim 1 is incorporated. Pogrebtsov-Sanossian further teaches: further comprising: training, based on a group of annotated image assets, the machine-learning model to identify: a type of graphical object within a digital image corresponding to the imaging data, (the converter 106 of the machine model identifies a type of graph, and different types of elements within those graphs, such as headers and footers, ¶ 41) a placement of the graphical object within a layout, and a prominence of the graphical object within the layout, wherein each annotated image asset, of the group of annotated image assets, is associated with a label or a multi-dimensional tuple of labels indicating a type, a placement, and a prominence for one or more graphical objects in that annotated image asset. (Sanossian teaches that, to generate template-based insight reports, each image and/or related feature vector [each annotated image asset] used by the machine-learning model must encode specific, actionable attributes [associated with a label] about graphical objects, such as their type, spatial placement, and visual prominence, because these precise, high-level structural and spatial data points are the essential parameters needed to map visual components to corresponding text, metrics, or narratives within a template-based report; see Sanossian ¶ 61 (templates) and ¶ 97 (feature vectors), and see the mapping for claim 1)

Claim 10: The rejection of claim 1 is incorporated. Pogrebtsov further teaches: wherein the imaging data defines pixel content of pixels that constitute the visual representation. (graphical objects (or visual representation[s]) can take virtually any form, from traditional charts and maps to complex 3D images, audio-video displays, or multi-segmented hybrid layouts, for example on an LCD device, ¶¶ 126 and 151. Here, imaging data defining pixel content is inherent in LCD pixels because the digital image is sampled and mapped into a fixed, discrete grid of individual, addressable cells (pixels) that are electronically controlled to manipulate light transmittance to form the visual representation.)

Independent Claim 11: Claim 11 is directed to computer-readable media for accomplishing the steps of the method of claim 1 and is rejected using similar rationale(s).

Claims 12-14 and 18-20: Claims 12-14 and 18-20 are directed to computer-readable media for accomplishing the steps of the methods of claims 2-4 and 8-10, respectively, and are rejected using similar rationale(s).

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Pogrebtsov (US 20180349456 A1) in view of Sanossian (US 20210350068 A1), as applied to claims 1 and 11 above, and further in view of Grois, Dan et al. (US 20230088688 A1; hereinafter Grois).

Claim 5: The rejection of claim 1 is incorporated. Pogrebtsov teaches that the machine learning can be through neural networks, ¶ 154. However, Pogrebtsov does not appear to expressly teach, but Grois teaches: wherein the machine-learning model comprises a convolutional neural network. (neural network processing can include a combination of neural networks, e.g., convolutional and graph neural networks, ¶ 28, that are most appropriate for accomplishing specific tasks/goals, ¶ 29.)
Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method of Pogrebtsov to include wherein the machine-learning model comprises a convolutional neural network, as taught by Grois. One would have been motivated to make such a combination in order to improve the versatility and functionalities of the method by using a combination of appropriate neural network types for different tasks/goals, Grois ¶ 29.

Claim 15: The rejection of claim 11 is incorporated. Claim 15 is directed to computer-readable media for accomplishing the steps of the method of claim 5 and is rejected using similar rationale(s).

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Pogrebtsov (US 20180349456 A1) in view of Sanossian (US 20210350068 A1), as applied to claims 1 and 11 above, and further in view of Chung, Chiew Yuan et al. (US 20200335190 A1; hereinafter Chung).

Claim 6: The rejection of claim 1 is incorporated. Pogrebtsov further teaches: wherein the first UI object comprises a first chart or a first graph indicative of the metric based on the source data (the graphical objects (or "visual representations") are charts, tables, etc., Pogrebtsov ¶¶ 97 and 126.) Pogrebtsov-Sanossian does not appear to expressly teach, but Chung teaches: and wherein the second UI object comprises a second chart or a second graph indicative of the metric based on the input data (an insight report [second UI object] can be in the form of a chart, fig. 8 and ¶ 90). Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the method of Pogrebtsov to include wherein the second UI object comprises a second chart or a second graph indicative of the metric based on the input data, as taught by Chung. One would have been motivated to make such a combination in order to improve the versatility and practicality of the method by displaying the insight report in a form that is known and effective in the art, e.g., a chart, for displaying insight reports, Chung ¶ 90.

Claim 16: The rejection of claim 11 is incorporated. Claim 16 is directed to computer-readable media for accomplishing the steps of the method of claim 6 and is rejected using similar rationale(s).

Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Pogrebtsov (US 20180349456 A1) in view of Sanossian (US 20210350068 A1), as applied to claims 1 and 11 above, and further in view of Moore, Michael R. et al. (US 20080301546 A1; hereinafter Moore).

Claim 7: The rejection of claim 1 is incorporated. Pogrebtsov, as modified, does not appear to expressly teach, but Moore teaches: wherein prominence for the second UI object within the layout is represented by a ratio of area occupied by the second UI object to a total layout area. (see ¶ 41 and fig. 4. The paragraph details a UI scaling method where an element's size is determined as a proportion of the total layout area to consistently represent its visual prominence. This approach uses normalized values (fractions/percentages) based on the overall layout dimensions, ensuring that the relative size and importance of elements remain consistent regardless of the final output's actual size or aspect ratio.)
Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the method of Pogrebtsov to include wherein prominence for the second UI object within the layout is represented by a ratio of area occupied by the second UI object to a total layout area, as taught by Moore. One would have been motivated to make such a combination in order to improve the flexibility, scalability, and self-adjustability of the method by maintaining visual integrity across a wide variety of output sizes and aspect ratios, Moore ¶ 41.

Claim 17: Claim 17 is directed to computer-readable media for accomplishing the steps of the method of claim 7 and is rejected using similar rationale(s).

Response to Arguments

The previous 112(b) rejections of claims 1-20 have been withdrawn in view of the claim amendments. Applicant's 103 arguments have been fully considered, but they are not persuasive and/or are moot in view of the new grounds of rejection presented above.

First, the applicant alleges that the cited art fails to teach a "visual representation that is based on a layout associated with the imaging data and the imaging metadata, wherein the visual representation comprises a UI object based on the source data and another UI object based on derivative data," because "Pogrebtsov's particle visualization repositions graphical objects based on similarity scores and force-based positioning algorithms, rather than from any imaging data or imaging metadata defining a layout structure" and because "Pogrebtsov does not disclose separate UI objects for source data and derivative data within a predefined layout structure", Remarks Page(s) 9-10. The examiner respectfully disagrees because, as explained in the 103 rejection section above, Pogrebtsov teaches: wherein the output of the first UI object comprises a visual representation that is based on a layout associated with the imaging data and the imaging metadata, wherein the visual representation comprises a UI object based on the source data and another UI object based on derivative data, (discovery of relationships [derivative data] between data sets and/or between records of a data set [source data], ¶ 49; particles are arranged [based on a layout associated with imaging data] based on the selected fields and the discovered relationships [derivative data], as represented to a user by the proximity of the objects to one another, ¶¶ 50-51. The display is also associated with imaging metadata, e.g., color of the particles, ¶ 57 and fig. 1E, and distances between the particles, ¶¶ 52-53, 55 and 60 and figs. 1B-1C. Particles [a UI object based on the source data] can be grouped as a result of analyzing the similarities between data [source data], Abstract and ¶¶ 59-60, and borders [another UI object based on derived data] are overlaid onto the canvas to provide a visual indication of the separation between the groups of particles, ¶ 58 and fig. 1I. Displayed borders are considered interface objects based on the similarity analysis [derived data] because they act as visual containers, or "dividers," that utilize the Gestalt principle of enclosure to explicitly separate, organize, and create a clear, actionable boundary between distinct groups of particles, and a person having ordinary skill in the art would have understood these borders as such objects.)
Concerning the allegation that "Pogrebtsov's particle visualization repositions graphical objects based on similarity scores and force-based positioning algorithms, rather than from any imaging data or imaging metadata defining a layout structure": even assuming that it is correct that "Pogrebtsov's particle visualization repositions graphical objects based on similarity scores and force-based positioning algorithms", the claim is not limited by a step of "repositioning", and the claim language does not preclude repositioning based on similarity scores and force-based positioning algorithms.

Second, the applicant attacks Pogrebtsov for not teaching "a machine-learning model trained based on a group of annotated image assets, each associated with a placement and a prominence for one or more graphical objects in that annotated image asset", Remarks Page 10. This is unpersuasive because Pogrebtsov is not relied upon to teach this limitation. One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Third, the applicant attacks Chamberlain in one or more arguments, Remarks Pages 10-14. These arguments are moot in view of the new grounds of rejection presented above, since Chamberlain is no longer relied upon.

Fourth, the applicant relies on the arguments above to allege patentability of the remaining claims, Remarks Page 14. The examiner respectfully disagrees for the reasons provided above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Below is a list of these references, including why they are pertinent:

Desouky, Anas et al., US 20210049487 A1, is pertinent to claim 1 for disclosing a machine learning engine for retrieving data and supplementing that data with relevant metadata, ¶ 60.

Breeden, Jared et al., US 11675473 B1, is pertinent to claim 1 for disclosing extracting metric data and previewing the metric data on a user interface, Abstract.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GABRIEL S MERCADO, whose telephone number is (408) 918-7537. The examiner can normally be reached Mon-Fri, 8am-5pm (Eastern Time).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kieu Vu, can be reached at (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Gabriel Mercado/
Examiner, Art Unit 2171

Prosecution Timeline

Jun 26, 2023
Application Filed
May 17, 2025
Non-Final Rejection — §103
Aug 21, 2025
Response Filed
Nov 07, 2025
Final Rejection — §103
Feb 06, 2026
Request for Continued Examination
Feb 10, 2026
Response after Non-Final Action
Feb 19, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12543983
SYSTEMS AND METHODS FOR EMOTION PREDICTION
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12535942
BLOWOUT PREVENTER SYSTEM WITH DATA PLAYBACK
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12511024
Multi-Application Interaction Method
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12498838
CONTEXT-AWARE ADAPTIVE CONTENT PRESENTATION WITH USER STATE AND PROACTIVE ACTIVATION OF MICROPHONE FOR MODE SWITCHING USING VOICE COMMANDS
Granted Dec 16, 2025 (2y 5m to grant)

Patent 12498843
Display of Book Section-Specific Fullscreen Recommendations for Digital Readers
Granted Dec 16, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 42%
With Interview: 69% (+26.4%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 198 resolved cases by this examiner. Grant probability derived from career allow rate.
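The footnote above says the grant probability is derived from the career allow rate, and the interview figure is consistent with simply adding the reported lift in percentage points. A minimal sketch of that reading (Python; the additive model is an assumption inferred from the displayed numbers, not documented behavior):

```python
granted, resolved = 84, 198   # from the Examiner Intelligence panel

base = granted / resolved               # 0.4242... -> displayed as 42%
lift = 0.264                            # reported interview lift, +26.4 points
with_interview = base + lift            # 0.6882... -> displayed as 69%

print(f"Grant probability: {base:.0%}")
print(f"With interview:    {with_interview:.0%}")
```

Read as percentage points, the lift reproduces the 69% "with interview" figure exactly, which suggests the dashboard adds the lift to the career allow rate rather than applying a multiplicative adjustment.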
