Prosecution Insights
Last updated: April 19, 2026
Application No. 18/618,448

MACHINE LEARNING-BASED MANAGEMENT OF FEEDBACK DATA

Status: Final Rejection (§103)
Filed: Mar 27, 2024
Examiner: MONIKANG, GEORGE C
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: DELL PRODUCTS, L.P.
OA Round: 2 (Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 74% — above average (701 granted / 941 resolved; +12.5% vs TC avg)
Interview Lift: +7.2% (moderate lift), based on resolved cases with an interview
Avg Prosecution (typical timeline): 3y 0m; 48 applications currently pending
Career History: 989 total applications across all art units

Statute-Specific Performance

§101: 3.9% (-36.1% vs TC avg)
§103: 58.6% (+18.6% vs TC avg)
§102: 22.5% (-17.5% vs TC avg)
§112: 4.0% (-36.0% vs TC avg)
Deltas are relative to the estimated Tech Center average • Based on career data from 941 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claim(s) 1-4, 6, 8-14, 16, 18-24 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 6, 8-14, 16, 18-24 are rejected under 35 U.S.C. 103 as being unpatentable over Schwartz et al, US Patent Pub. 20250261569 P1, in view of Sinn et al, US Patent Pub. 20200133641 A1.

Re Claim 1, Schwartz et al discloses an apparatus comprising: at least one processing device comprising a processor coupled to a memory (para 0036: system processor along with memory), the at least one processing device being configured to: modify first data obtained from one or more sources, wherein the modifying comprises adding user context data to the first data to generate second data (para 0022: input prompt/first data is modified to improve context to create final prompt/second data), the second data representing the first data supplemented with a per-user context (para 0022: input prompt/first data is modified to improve context to create final prompt/second data); and in response to receipt of a query, generate a response to the query using at least one generative language model supplemented by a retrieval augmented generation process based on at least a portion of the second data (para 0023: final prompt with context is input to a GPT LLM/generative language model); but fails to disclose wherein the user context data further comprises one or more identifiers of one or more users; generate one or more virtual user feedback profiles based on at least a portion of the second data. However, Sinn et al discloses a system that teaches the concept of compiling or updating user virtual profiles based on various user actions that include one or more searches, where the user virtual profiles are identifiers (Sinn et al, para 0071: one or more searches, where the subsequent searches are interpreted as the second context search).
It would have been obvious to modify the Schwartz et al system such that it compiles virtual user profiles based on its queries (which include the second query data) as taught in Sinn et al for the purpose of creating a digital representation of the users that can be referenced when necessary.

Re Claim 2, the combined teachings of Schwartz et al and Sinn et al disclose the apparatus of claim 1, wherein the modifying further comprises generating a mapping between the first data and the second data, wherein the mapping comprises links between the first data and the second data (Schwartz et al, para 0022: input prompt/first data is modified to improve context to create final prompt/second data; wherein the context modified input prompt is naturally mapped to the initial input prompt).

Re Claim 3, the combined teachings of Schwartz et al and Sinn et al disclose the apparatus of claim 1, wherein the user context data further comprises one or more parameters attributable to one or more users such that the first data is supplemented based on the one or more parameters to form the per-user context in the second data (Schwartz et al, para 0022: input prompt/first data is modified to improve context to create final prompt/second data).

Re Claim 4, the combined teachings of Schwartz et al and Sinn et al disclose the apparatus of claim 3, wherein the one or more parameters attributable to the one or more users comprise a user familiarity with one or more subjects associated with the first data (Schwartz et al, para 0011: context or background information, whereby background information implies familiarity).

Re Claim 6, the combined teachings of Schwartz et al and Sinn et al disclose the apparatus of claim 1, wherein the at least one generative language model comprises a large language model (Schwartz et al, para 0023: final prompt with context is input to a GPT LLM/generative language model).

Re Claim 8, the combined teachings of Schwartz et al and Sinn et al disclose the apparatus of claim 6, wherein the generating of the response to the query further comprises translating the query into a prompt for the large language model (Schwartz et al, para 0023: final prompt with context is input to a GPT LLM/generative language model; whereby LLM includes translation capabilities).

Re Claim 9, the combined teachings of Schwartz et al and Sinn et al disclose the apparatus of claim 1, wherein the first data is obtained from the one or more sources using a data scraping process (Schwartz et al, para 0012: LLM models have been trained with diverse text sources, including books, articles, websites, and other textual data from the internet).

Re Claim 10, the combined teachings of Schwartz et al and Sinn et al disclose the apparatus of claim 1, but fail to explicitly disclose wherein the one or more sources of the first data comprise one or more of external data and internal data with respect to a given entity associated with managing the processing device. Since Schwartz et al teaches models and data scraping from multiple sources (Schwartz et al, para 0012), it would have been obvious to modify the Schwartz et al system such that the sources include internal and external sources for the purpose of diversifying and optimizing the sources.

Claim 11 has been analyzed and rejected according to claim 1. Claim 12 has been analyzed and rejected according to claim 2. Claim 13 has been analyzed and rejected according to claim 3. Claim 14 has been analyzed and rejected according to claim 4.
Claim 16 has been analyzed and rejected according to claim 6. Claim 18 has been analyzed and rejected according to claim 8. Claim 19 has been analyzed and rejected according to claims 9-10. Claim 20 has been analyzed and rejected according to claim 1.

Re Claim 21, the combined teachings of Schwartz et al and Sinn et al disclose the method of claim 20, wherein the modifying further comprises generating a mapping between the first data and the second data, wherein the mapping comprises links between the first data and the second data (Schwartz et al, para 0022: input prompt/first data is modified to improve context to create final prompt/second data; wherein the first input prompt/first data and the final prompt/second data are linked).

Re Claim 22, the combined teachings of Schwartz et al and Sinn et al disclose the method of claim 20, wherein the user context data further comprises one or more parameters attributable to one or more users such that the first data is supplemented based on the one or more parameters to form the per-user context in the second data (Sinn et al, para 0071: one or more user searches (including second context data as modified with Schwartz et al) are used to compile a user virtual profile, wherein the user virtual profile will henceforth be associated with the one or more searches and will naturally act as parameters for said user virtual profile).

Re Claim 23, the combined teachings of Schwartz et al and Sinn et al disclose the method of claim 22, but fail to explicitly disclose wherein the one or more parameters attributable to the one or more users comprise a user familiarity with one or more subjects associated with the first data. Since Sinn et al discloses different components that can be used to compile the user virtual profile (Sinn et al, para 0071), it would have been obvious for one of ordinary skill in the art to modify Sinn et al as used to modify Schwartz et al such that one of the parameters includes familiar subject matter of the user for the purpose of fine tuning the user virtual profile.

Re Claim 24, the combined teachings of Schwartz et al and Sinn et al disclose the method of claim 20, wherein the at least one generative language model comprises a large language model (Schwartz et al, abstract: large language model; paras 0009-0012: large language model).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEORGE C MONIKANG whose telephone number is (571)270-1190. The examiner can normally be reached Mon. - Fri., 9AM-5PM, ALT. Fridays off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carolyn R Edwards, can be reached at 571-270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GEORGE C MONIKANG/
Primary Examiner, Art Unit 2692
3/13/2026
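For readers parsing the claim language, the following is a minimal sketch of the kind of pipeline independent claim 1 describes: supplementing "first data" with per-user context to form "second data", building virtual user feedback profiles from that second data, and answering queries with a generative language model via retrieval augmented generation. It is illustrative only; every name in it (UserContext, add_user_context, build_virtual_profile, answer_query) is hypothetical and not drawn from the application or the cited Schwartz or Sinn references.

    # Hypothetical sketch of the claimed flow; names and logic are assumptions, not the applicant's code.
    from dataclasses import dataclass, field

    @dataclass
    class UserContext:
        user_id: str                                      # "identifiers of one or more users"
        familiarity: dict = field(default_factory=dict)   # e.g. {"storage": "expert"}

    def add_user_context(first_data, ctx):
        """Supplement records from one or more sources with a per-user context ("second data")."""
        return [{"text": t, "user_id": ctx.user_id, "familiarity": ctx.familiarity}
                for t in first_data]

    def build_virtual_profile(second_data):
        """Aggregate supplemented records into a virtual user feedback profile (assumes >= 1 record)."""
        return {"user_id": second_data[0]["user_id"],
                "topics": sorted({k for rec in second_data for k in rec["familiarity"]})}

    def answer_query(query, second_data, llm):
        """Retrieval augmented generation: retrieve matching second data, then prompt the model."""
        hits = [rec["text"] for rec in second_data
                if any(tok in rec["text"].lower() for tok in query.lower().split())]
        prompt = "Context:\n" + "\n".join(hits) + "\n\nQuestion: " + query
        return llm(prompt)   # llm is any callable generative language model

The point of contention in the rejection maps onto the user_id field and build_virtual_profile step, which the examiner concedes Schwartz does not disclose and instead imports from Sinn.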

Prosecution Timeline

Mar 27, 2024
Application Filed
Oct 06, 2025
Non-Final Rejection — §103
Jan 08, 2026
Response Filed
Mar 13, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604126: VEHICULAR MICROPHONE AND VEHICLE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596518: MICROPHONE INTERFACE, VEHICLE, CONNECTION METHOD, AND PRODUCTION METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596888: CONTEXTUALIZATION OF GENERATIVE LANGUAGE MODELS BASED ON ENTITY RESOURCE IDENTIFIERS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598428: TRANSDUCER AND ELECTRONIC DEVICE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591749: MACHINE LEARNING SYSTEM FOR MULTI-DOMAIN LONG DOCUMENT CLUSTERING (granted Mar 31, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 82% (+7.2%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 941 resolved cases by this examiner. Grant probability derived from career allow rate.
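The headline percentages can be reproduced from the counts shown above with simple arithmetic. The sketch below is an assumption about how the figures combine (an additive interview lift in percentage points on top of the career allow rate), not the tool's published methodology.

    # Assumed reconstruction of the displayed projections from the Examiner Intelligence counts.
    granted, resolved = 701, 941                     # from the Examiner Intelligence card
    allow_rate = granted / resolved                  # 0.745 -> displayed as 74%
    interview_lift = 0.072                           # "+7.2% Interview Lift"
    with_interview = allow_rate + interview_lift     # 0.817 -> displayed as 82%
    tc_average = allow_rate - 0.125                  # "+12.5% vs TC avg" implies a ~62% TC average

    print(f"Career allow rate:        {allow_rate:.0%}")
    print(f"Probability w/ interview: {with_interview:.0%}")
    print(f"Estimated TC average:     {tc_average:.0%}")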
