Prosecution Insights
Last updated: April 19, 2026
Application No. 17/749,772

DATA STRUCTURE CORRECTION USING NEURAL NETWORK MODEL

Final Rejection (§103)

Filed: May 20, 2022
Examiner: BAKER, IRENE H
Art Unit: 2152
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)
Grant Probability: 54% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 54% (129 granted / 238 resolved; -0.8% vs TC avg)
Interview Lift: +26.7% in resolved cases with interview (strong)
Typical Timeline: 3y 0m avg prosecution; 32 applications currently pending
Career History: 270 total applications across all art units

Statute-Specific Performance

§101: 26.3% (-13.7% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 4.6% (-35.4% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 238 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Introductory Remarks

In response to communications filed on 9 October 2025, claims 1, 12-13, 18, and 20 are amended per Applicant's request. No claims were cancelled, withdrawn, or added. Therefore, claims 1-20 are presently pending in the application, of which claims 1, 13, and 18 are presented in independent form.

The previously raised 112 rejection of the pending claims is withdrawn in view of the amendments to the claims. The previously raised 103 rejection of the pending claims is likewise withdrawn in view of the amendments. A new ground of rejection is issued for claims 1-12 and 18-20; the previous 103 rejection of claims 13-17 is maintained.

Response to Arguments

Applicant's arguments filed 9 October 2025 with respect to the rejection of the claims under 35 U.S.C. 112 (see Remarks, p. 7) have been fully considered and are persuasive. The 112 rejection is accordingly withdrawn.

Applicant's arguments filed 9 October 2025 with respect to the rejection of the claims under 35 U.S.C. 103 (see Remarks, pp. 8-9) have been fully considered but are not persuasive. Applicant solely argues that the amended claim features overcome the prior art. However, the 103 rejection is modified below to conform with the newly amended claim language for claims 1-12 and 18-20, and maintained for claims 13-17. See the 103 rejection below for further details.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 9-12, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shoemaker et al. ("Shoemaker") (US 2015/0185995 A1) in view of Kulkarni et al. ("Kulkarni") (US 2022/0107852 A1).

Regarding claim 1: Shoemaker teaches A method for data structure modification, the method comprising: obtaining first data structures that represent user interactions with first user content by a user of a client device; labeling the first data structures using content features of the first user content; training a [machine learning] model using the labeled first data structures to obtain user-specific weights for the user of the client device (Shoemaker, [0022], where user history 110 (e.g., previous actions, behavior, habits, data associated with a user) includes previous actions 111-114 provided to (e.g., read by) machine learning engine 120 to generate predictive models (i.e., the "generation" of predictive models implying "training a…model using the…first data structures"). See also Shoemaker, [0035], where the machine learning 120 may process user history 110 to produce predictive models 130. See Shoemaker, [0048], where models have assigned weights to one or more pieces or fields (i.e., "labels") of the extracted information about content or action (IACA) of previous actions (see also, e.g., Shoemaker, [0022]).
See Shoemaker, [0029] and [0032-0034], where the information about content includes one or more fields of the content that is structured, e.g., stored in or retrieved from a database, where any fields of the content can be used as information about content, and fields can be different based on content. See Shoemaker, [0046], where IACA associated with action 240 is extracted, e.g., a data type of photo, a timestamp, a location of home, and an identification of a person on the photo being the relative (i.e., "labeling the first data structures using content features of the first user content"));

receiving a second data structure that represents user interactions with second user content by the user; obtaining a predicted value for the second data structure based on an output of the trained [machine learning] model using content features of the second user content and the user-specific weights as inputs to the trained [machine learning] model; and modifying the second data structure to include the predicted value … (Shoemaker, [0017], where prediction engine 150 identifies and uses one or more predictive models 130 based on the user actions 140 to provide the user with one or more predictions 160 (i.e., "based on an output of the trained [machine learning] model"); recall from Shoemaker in the above limitations with respect to the "using content features of the second user content and the user-specific weights as inputs to the trained [machine learning] model"). See Shoemaker, [0050], where one or more predictions are presented to the user, the user may provide input to identify or select one of the predictions, or input that is not in any of the predictions (i.e., "obtaining a predicted value for the second data structure"). The user's input is provided as feedback, e.g., history 110. Machine learning 120 may incorporate or account for the user's feedback in changing one or more models (i.e., "data structures") that were already generated, e.g., predictive models 130 may be changed based on the user feedback. See also Shoemaker, [0048], where models may have assigned weights to one or more pieces or fields of the extracted information about content or action (IACA) of previous actions (see also Shoemaker, [0022])).

Shoemaker does not appear to explicitly state that the machine learning model is a neural network model; and when, based on comparing the predicted value with the second data structure, it is determined the second data structure is inconsistent with the predicted value, the second data structure is modified.

Kulkarni teaches the machine learning model is a neural network model (Kulkarni, [0043], where the natural language model 202 comprises a neural network); and when, based on comparing the predicted value with the second data structure, it is determined the second data structure is inconsistent with the predicted value, the second data structure is modified (Kulkarni, [0089], where user activity sequence system 104 utilizes multiple portions of the series of sequential tokens to generate the predicted activity event at act 314, where the natural language model 202 compares the predicted activity events generated based on the different portions of the series of sequential tokens. Based on the comparison, the natural language model 202 can confirm the accuracy of the predicted activity event and/or add a confidence score to the predicted activity event. In the case that the first and second predicted activity events differ, the system may select only one (implying that the second predicted activity event may be selected)).

Although Kulkarni does not appear to explicitly state that a second data structure comprising predicted activity events is persisted and thus "modified" as claimed, one of ordinary skill in the art would have found it obvious to have modified Kulkarni to have persisted such information with the motivation of avoiding having to recompute predictions for the next actions that a user may take (e.g., as some users may be very routine, and thus it would make more sense to persist such information instead of running predictions each and every time).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Shoemaker and Kulkarni with the motivation of producing more accurate results despite distortions or less-than-perfect input data (footnotes 1-2). Furthermore, it would have been obvious to one of ordinary skill in the art to have compared predicted events with actual stored events with the motivation of ensuring that relevant information is updated as needed, e.g., only when inconsistent, thereby conserving processing resources.

Regarding claim 2: Shoemaker as modified teaches The method of claim 1, wherein the user interactions with the first user content and the user interactions with the second user content are active interactions (Shoemaker, [0030], where information about one or more actions taken in association with content includes history usage information. See also Shoemaker, [0061], where actions may include selecting who to share content with, selecting how the content is shared, e.g., selecting what application to share the content. See also, e.g., Shoemaker, [0019], where user actions may include browsing on the Internet).
Regarding claim 3: Shoemaker as modified teaches The method of claim 1, wherein labeling the first data structures comprises labeling the first data structures using an identifier of an author of the first user content (Shoemaker, [0024], where information about content can be metadata including descriptive metadata such as description of a resource for identification and retrieval including author. See also Shoemaker, [0029], where if the content is an email message, the fields (e.g., To, From, Subject, CC, BCC, body, etc.) can be used as information about content, where IACA fields may comprise any fields and any number of fields (Shoemaker, [0032])).

Regarding claim 4: Shoemaker as modified teaches The method of claim 1, wherein labeling the first data structures comprises labeling the first data structures using a timestamp indicating when the first user content was viewed (Shoemaker, [0024] and [0046], where fields extracted from the information about content associated with an action 240 may include timestamp. See Shoemaker, [0019-0021], [0025], and [0060], where user actions may be browsing (i.e., "view[ing] first user content") on the Internet, e.g., the time when the webpage is visited (i.e., "viewed")).

Regarding claim 5: Shoemaker as modified teaches The method of claim 1, wherein labeling the first data structures comprises, for each of the first data structures, creating a training data structure that includes one or more labels from the first user content as first fields and portions of a corresponding first data structure as second fields (Shoemaker, [0022], where prediction generator 135 may trigger machine learning 120 to process history 110. See Shoemaker, [0016], where the model generation portion uses machine learning 120 to analyze previous actions of a user, the behavior and/or habits of the user, and data associated with the user (collectively referred to as user history 110), to generate predictive models 130 associated with the user (i.e., implying that the history 110 is a form of "training data", i.e., corresponding to the claimed "training data structure"). See Shoemaker, [0046], where IACA associated with action 240 is extracted, e.g., a data type of photo, a timestamp, a location of home, and an identification of a person on the photo being the relative (i.e., "first fields…second fields")).

Regarding claim 6: Shoemaker as modified teaches The method of claim 1, wherein the user interactions with the first user content include a passive behavior of the user (Shoemaker, [0025] and [0030], where information about content includes any data recorded or generated about the content, including the location of the user when the webpage is visited, information about the network, system, devices, operating systems, applications, software, etc. used to perform the actions, date, time, location when the actions are performed, etc.).

Regarding claim 7: Shoemaker as modified teaches The method of claim 1, wherein the content features of the first user content include one or more of portions of the first user content that are viewed by the user, user identifiers for other users associated with the user interactions, and timestamps associated with the user interactions (Shoemaker, [0023-0025] and [0029-0030], where information about content includes any data recorded or generated about the content, including time when the action was performed, as well as description of a resource such as file name, title, etc., as well as the URL of the webpage visited by the user. If the content is email, then the IACA fields may include the subject and body that can be used as information about content).
Regarding claim 9: Shoemaker as modified teaches The method of claim 1, wherein obtaining the predicted value comprises: configuring the trained neural network model using the user-specific weights (Shoemaker, [0022], where user history 110 (e.g., previous actions, behavior, habits, data associated with a user) includes previous actions 111-114 provided to (e.g., read by) machine learning engine 120 to generate predictive models (i.e., the “generation” of predictive models implying “training a…model using the…first data structures”). See also Shoemaker, [0035], where the machine learning 120 may process user history 110 to produce predictive models 130. See Shoemaker, [0048], where models have assigned weights to one or more pieces or fields (i.e., “labels”) of the extracted information about content or action (IACA) of previous actions (see also, e.g., Shoemaker, [0022])); and providing the content features to input nodes of the trained neural network model (Shoemaker, [0017], where prediction engine 150 identifies and uses one or more predictive models 130 based on the user actions 140 to provide the user with one or more predictions 160, which may include one or more action options. Recall from Shoemaker, [0029] and [0032-0034], where the information about content includes one or more fields of the content that is structured, e.g., stored in or retrieved from a database, where any fields of the content can be used as information about content, and fields can be different based on content. See Shoemaker, [0046], where IACA associated with action 240 is extracted, e.g., a data type of photo, a timestamp, a location of home, and an identification of a person on the photo being the relative). 
Regarding claim 10: Shoemaker as modified teaches The method of claim 1, wherein modifying the second data structure comprises updating a field of the second data structure to include the predicted value (Copper, [0033-0039] and [0067], where the system generates replacement values for invalid or missing values in historical data records used for training a primary (machine learning) model or in new data records introduced to the computing system for processing by a primary (machine learning) model after the model is placed in service, where the replacement model data structure includes information previously placed in the field status data structure 440).

Regarding claim 11: Shoemaker as modified teaches The method of claim 1, wherein receiving the second data structure comprises receiving the second data structure from a signal service that generates the second data structure based on the user interactions with the second user content by the user (Shoemaker, [0016], where history 110 can be the history of one or more applications, websites, services of different providers, etc., to which the user has given permissions or consents to gather the user's history and user actions. See Shoemaker, [0027], where data recorded or generated about the content may be device-generated, where devices may communicate with a cellular network (Shoemaker, [0072])).

Although Shoemaker does not appear to explicitly state that the data is generated "from a signal service" as claimed, Shoemaker's disclosure would have suggested to one of ordinary skill in the art that the cellular network performs this generation, with the motivation of offloading computation to another device instead of the user's device (which may, for example, otherwise require more resource consumption).
Regarding claim 12: Shoemaker as modified teaches The method of claim 1, wherein the method further comprises processing a telemetry log for the client device that represents the user interactions with the first user content (Shoemaker, [0030], where history usage information includes usage records or usage logs from, e.g., a device (see, e.g., Shoemaker, [0027])); and wherein obtaining the predicted value comprises cross-verifying the telemetry log with the output of the trained neural network model (Shoemaker, [0018] and [0050], where the final action 180 (e.g., the user's selection) is provided as feedback to the history 110 (i.e., "telemetry log"; see, e.g., Shoemaker, [0030], where history usage information may be stored as usage records or usage logs), and machine learning 120 may incorporate or account for the user's feedback in making changes to one or more models already generated. See Resnick, [0029], with respect to the model being a "neural network").

Regarding claim 18: Claim 18 recites substantially the same claim limitations as claim 1, and is rejected for the same reasons. Note that Shoemaker teaches A system for processing data structures that represent user interactions, the system comprising: at least one processor; memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations, the set of operations comprising [the claimed steps] (Shoemaker, [0063], [0068], and [0075], where the disclosed systems may be implemented by processor(s) by loading computer executable instructions stored on a medium onto one or more of the processors, such media including signals).

Regarding claim 19: Claim 19 recites substantially the same claim limitations as claim 7, and is rejected for the same reasons.

Regarding claim 20: Claim 20 recites substantially the same claim limitations as claim 12, and is rejected for the same reasons.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Shoemaker et al. ("Shoemaker") (US 2015/0185995 A1), in view of Kulkarni et al. ("Kulkarni") (US 2022/0107852 A1), and further in view of Wheatley et al. ("Wheatley") (US 2015/0181289 A1).

Regarding claim 8: Shoemaker as modified teaches The method of claim 7, but does not appear to explicitly teach wherein the second data structure is modified to have changed timestamps.

Wheatley teaches wherein the second data structure is modified to have changed timestamps (Wheatley, [0040], where the application may compare the time of an inconsistent activity to the time associated with events of the user, implying that the data structure may have timestamps stored. Although Wheatley does not appear to explicitly state that the second data structure is "modified" to have changed timestamps, one of ordinary skill in the art would have found it obvious to have modified Wheatley to have such changed timestamps with the motivation of being able to quickly pinpoint certain activities/events, rather than looking at a range of activities/events, i.e., faster processing).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Shoemaker as modified and Wheatley with the motivation of being able to accurately determine whether activities are consistent or inconsistent.

Claims 13-17 are rejected under 35 U.S.C. 103 as being unpatentable over Shoemaker et al. ("Shoemaker") (US 2015/0185995 A1), in view of Copper ("Copper") (US 2019/0340533 A1).
Regarding claim 13: Shoemaker teaches A method for data structure modification, the method comprising: processing a telemetry log representing first user interactions with first user content by a user of a client device to identify the first user interactions (Shoemaker, [0030], where history usage information includes usage records or usage logs from, e.g., a device (see, e.g., Shoemaker, [0027]));

generating a predicted data structure that corresponds to the identified first user interactions based on an output of a trained neural network model using content features of the first user content and user-specific weights as inputs to the trained neural network model, including mapping entries within the telemetry log to fields within the predicted data structure, and populating the fields within the predicted data structure with data based on the mapped entries within the telemetry log (Shoemaker, [0022], where user history 110 (e.g., previous actions, behavior, habits, data associated with a user) (i.e., "telemetry log") includes previous actions 111-114 provided to (e.g., read by) machine learning engine 120 to generate predictive models (i.e., the "generation" of predictive models implying "training a…model using the…first data structures"). See also Shoemaker, [0035], where the machine learning 120 may process user history 110 to produce predictive models 130. See Shoemaker, [0048], where models have assigned weights to one or more pieces or fields (i.e., "labels") of the extracted information about content or action (IACA) of previous actions (see also, e.g., Shoemaker, [0022]). See Shoemaker, [0017], where prediction engine 150 identifies and uses one or more predictive models 130 based on the user actions 140 to provide the user with one or more predictions 160, which may include one or more action options. See Shoemaker, [0029] and [0032-0034], where the information about content includes one or more fields of the content that is structured, e.g., stored in or retrieved from a database, where any fields of the content can be used as information about content, and fields can be different based on content. See Shoemaker, [0046], where IACA associated with action 240 is extracted, e.g., a data type of photo, a timestamp, a location of home, and an identification of a person on the photo being the relative. See also, e.g., Shoemaker, [0058], where a camera application can pre-populate one or more sharing settings with predicted privacy preferences (implying "mapped entries" as claimed)) … .

Shoemaker does not appear to explicitly teach identifying discrepancies between the predicted data structure and first data structures corresponding to the first user interactions of the user of the client device; and updating, using the predicted data structure, the first data structures based on the identified discrepancies.

Copper teaches identifying discrepancies between the predicted data structure and first data structures corresponding to the first user interactions of the user of the client device; and updating, using the predicted data structure, the first data structures based on the identified discrepancies (Copper, [0052], where the system may identify missing or invalid data, where this is performed on new data records introduced to the computing system for processing (Copper, [0067]). See Copper, [0033-0039] and [0067], where the system generates replacement values for invalid or missing values in historical data records used for training a primary (machine learning) model or in new data records introduced to the computing system for processing by a primary (machine learning) model after the model is placed in service, where the replacement model data structure includes information previously placed in the field status data structure 440.

See Shoemaker with respect to "[the first data structures previously generated] based on second user interactions of the user of the client device", i.e., Shoemaker, [0050], where one or more predictions are presented to the user, the user may provide input to identify or select one of the predictions, or input that is not in any of the predictions. The user's input is provided as feedback, e.g., history 110. Machine learning 120 may incorporate or account for the user's feedback in changing one or more models (i.e., "data structures") that were already generated, e.g., predictive models 130 may be changed based on the user feedback).

Although Copper does not appear to explicitly state that the primary (machine learning) algorithm/model is a neural network, one of ordinary skill in the art would have found it obvious to have modified Copper to include a neural network with the motivation of producing more accurate results despite distortions or less-than-perfect input data (footnotes 3-4).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Shoemaker and Copper (hereinafter "Shoemaker as modified") with the motivation of generating a more complete data set, i.e., by including missing data values with inferred (i.e., predicted) data values, for training purposes even for data that was previously missing those values, thereby increasing the utility and accuracy of computer implementations and executions of such algorithms (Copper, [0002]).
Regarding claim 14: Shoemaker as modified teaches The method of claim 13, wherein: the identified discrepancies include an inconsistent field of an existing data structure; and updating the first data structures comprises updating the inconsistent field with a predicted value from the predicted data structure (Copper, [0033-0039] and [0067], where the system generates replacement values for invalid or missing values in historical data records used for training a primary (machine learning) model or in new data records introduced to the computing system for processing by a primary (machine learning) model after the model is placed in service, where the replacement model data structure includes information previously placed in the field status data structure 440).

Regarding claim 15: Shoemaker as modified teaches The method of claim 13, wherein: the identified discrepancies include a missing data structure; and updating the first data structures comprises storing the predicted data structure with the first data structures (Copper, [0033-0039] and [0067], citing the same replacement-value disclosure as for claim 14).

Regarding claim 16: Shoemaker as modified teaches The method of claim 13, wherein: the first user interactions include reading an email by the user; the first user content includes the email; and the predicted data structure comprises a plurality of fields including a duration field that indicates a time period for reading the email (Shoemaker, [0029], where content may be an email message, and the fields can be used as information about content. See also Shoemaker, [0019], where actions include browsing on the Internet. See Shoemaker, [0024-0025], where information about content may be metadata, including timestamps, as well as a time when a webpage is visited by a user).

Although Shoemaker does not appear to explicitly state that the action pertains to reading an email by the user as claimed, or that the fields include a duration field that indicates a time period for reading the email as claimed, the claimed invention does not distinguish over the prior art because the differences between the claim limitations and the prior art's disclosure are found only in nonfunctional descriptive material that is not functionally involved in the steps recited. The various steps of claim 13 would have been performed the same regardless of the specific data involved (i.e., reading an email with a duration field indicating a time period for reading that email as claimed, or some other data). Thus, this descriptive material will not distinguish the claimed invention from the prior art in terms of patentability. See In re Gulack, 703 F.2d 1381, 1385, 217 USPQ 401, 404 (Fed. Cir. 1983); In re Lowry, 32 F.3d 1579, 32 USPQ2d 1031 (Fed. Cir. 1994). Therefore, it would have been obvious to a person of ordinary skill in the art to have referred to Shoemaker's teachings in making the claimed invention, because such data does not functionally relate to the steps in the method claimed and because the subjective interpretation of the data does not patentably distinguish the claimed invention over the prior art.

Regarding claim 17: Claim 17 recites substantially the same claim limitations as claim 7, and is rejected for the same reasons.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRENE BAKER, whose telephone number is (408) 918-7601. The examiner can normally be reached M-F, 8 AM-5 PM PT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, NEVEEN ABEL-JALIL, can be reached at (571) 270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IRENE BAKER/
Primary Examiner, Art Unit 2152
2 January 2026

Footnotes:
1. Frazier et al., US 2002/0120435 A1, at [0002] ("Advantages of artificial neural networks include their ability to learn and their ability to produce relatively more accurate results (than those produced by standard computer systems) despite distortions in input data").
2. Grayson et al., US 5,111,531 A, at Background ("The potential advantage of neural nets is that, unlike classification logic, they can work with less than perfect input information").
3. Frazier et al., US 2002/0120435 A1, at [0002] (same passage as footnote 1).
4. Grayson et al., US 5,111,531 A, at Background (same passage as footnote 2).
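For orientation, the correction loop recited in independent claim 1 (label interaction records with content features, train a per-user model to obtain user-specific weights, predict a value for a new record, and modify the record only when it is inconsistent with the prediction) can be sketched as follows. This is an illustrative sketch only: the names and the simple per-label-average "model" are assumptions for exposition, not the applicant's disclosed implementation or anything in the cited art, which claims a neural network model; only the compare-then-modify control flow mirrors the claim language.

```python
# Hypothetical illustration of the claim 1 flow: label -> train -> predict -> compare -> modify.
# All names and the per-label-average "model" are assumptions, not the application's method.

from collections import defaultdict

def train_user_model(labeled_records):
    """Derive user-specific 'weights': here, a mean duration per content label."""
    totals = defaultdict(lambda: [0.0, 0])
    for rec in labeled_records:
        label = rec["content_label"]          # content feature used as the label
        totals[label][0] += rec["duration"]   # observed interaction value
        totals[label][1] += 1
    return {label: s / n for label, (s, n) in totals.items()}

def correct_record(record, weights, tolerance=0.5):
    """Predict the field from the trained weights; overwrite only on inconsistency."""
    predicted = weights.get(record["content_label"])
    if predicted is None:
        return record                          # no basis to predict; leave unchanged
    observed = record["duration"]
    if abs(observed - predicted) > tolerance * predicted:   # inconsistency test
        record = {**record, "duration": predicted, "corrected": True}
    return record

history = [
    {"content_label": "email", "duration": 60},
    {"content_label": "email", "duration": 80},
    {"content_label": "photo", "duration": 5},
]
weights = train_user_model(history)            # user-specific weights
fixed = correct_record({"content_label": "email", "duration": 900}, weights)
```

An implausible 900-second reading is replaced by the user's typical value, while a record within tolerance passes through unmodified, matching the "only when inconsistent" limitation the examiner highlights as resource-conserving.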

Prosecution Timeline

May 20, 2022: Application Filed
Jul 06, 2025: Non-Final Rejection (§103)
Oct 09, 2025: Response Filed
Oct 10, 2025: Applicant Interview (Telephonic)
Oct 10, 2025: Examiner Interview Summary
Jan 02, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602368: ANOMALY DETECTION DATA WORKFLOW FOR TIME SERIES DATA (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591890: CONCURRENT STATE MACHINE PROCESSING USING A BLOCKCHAIN (granted Mar 31, 2026; 2y 5m to grant)
Patent 12566880: SEAMLESS UPDATING AND RECONCILIATION OF DATABASE IDENTIFIERS GENERATED BY DIFFERENT AGENT VERSIONS (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566790: LAKEHOUSE METADATA CHANGE DETERMINATION METHOD, DEVICE, AND MEDIUM (granted Mar 03, 2026; 2y 5m to grant)
Patent 12536138: FILE SYSTEM REDIRECTOR SERVICE IN A SCALE OUT DATA PROTECTION APPLIANCE (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 54%
With Interview: 81% (+26.7%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 238 resolved cases by this examiner. Grant probability derived from career allow rate.
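The projection figures are internally consistent if the interview lift is read as additive percentage points on top of the career allow rate. A quick check (the additive combination is an assumption about how the dashboard derives the 81%, not a documented formula):

```python
# Assumption: the interview lift is additive in percentage points,
# which matches the figures shown: 54% base + 26.7 points ≈ 81%.
base_allow_rate = 129 / 238          # examiner's career allow rate, ~54%
interview_lift = 0.267               # +26.7-point lift with interview
with_interview = base_allow_rate + interview_lift
print(round(base_allow_rate * 100), round(with_interview * 100))  # prints: 54 81
```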
