Prosecution Insights
Last updated: April 19, 2026
Application No. 18/466,617

Apparatus for Domain Generalization of Machine Learning Models, Methods and Computer Readable Recording Mediums Therefor

Non-Final OA §102
Filed
Sep 13, 2023
Examiner
COUSO, JOSE L
Art Unit
2667
Tech Center
2600 — Communications
Assignee
Hyperconnect LLC
OA Round
1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 90%, above average (1069 granted / 1185 resolved; +28.2% vs TC avg)
Interview Lift: +8.2% (moderate), measured across resolved cases with interview
Typical Timeline: 2y 5m average prosecution; 21 applications currently pending
Career History: 1206 total applications across all art units

Statute-Specific Performance

§101: 18.5% (-21.5% vs TC avg)
§103: 12.3% (-27.7% vs TC avg)
§102: 41.6% (+1.6% vs TC avg)
§112: 9.5% (-30.5% vs TC avg)
Deltas are measured against the Tech Center average estimate. Based on career data from 1185 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

35 USC § 101 Statutory Analysis

The claims do not recite any of the judicial exceptions enumerated in the 2019 Revised Patent Subject Matter Eligibility Guidance. Further, the claims do not recite any method of organizing human activity, such as a fundamental economic concept or managing interactions between people. Finally, the claims do not recite a mathematical relationship, formula, or calculation. Thus, the claims are eligible because they do not recite a judicial exception.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 14, 17, 19 and 20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Wang et al. (U.S. Patent Application Publication No. US 2020/0293828 A1) (hereafter referred to as "Wang").
With regard to claim 1, Wang describes setting a parameter of a first model and a parameter of a second model based on a pre-trained model (refer for example to paragraphs [0114] and [0531]); learning the second model by performing a predetermined task on a source domain (refer for example to paragraph [0078]); estimating an unobservable gradient for model updates on an unseen domain (refer for example to paragraph [0089]) based on the parameter of the first model and the parameter of the second model (refer for example to paragraphs [0061], [0065], [0114] and [0531]); and updating the first model based on the estimated unobservable gradient (refer for example to paragraph [0531]).

As to claim 14, Wang describes a non-transitory computer-readable medium including instructions, that when executed by a processor, perform a process (see Figure 18 and refer for example to paragraphs [0288] and [0293]) for domain generalization of a machine learning model, wherein the process comprises setting a parameter of a first model and a parameter of a second model based on a pre-trained model (refer for example to paragraphs [0114] and [0531]); learning the second model by performing a predetermined task on a source domain (refer for example to paragraph [0078]); estimating an unobservable gradient for model updates on an unseen domain (refer for example to paragraph [0089]) based on the parameter of the first model and the parameter of the second model (refer for example to paragraphs [0061], [0065], [0114] and [0531]); and updating the first model based on the estimated unobservable gradient (refer for example to paragraph [0531]).
As to claim 17, Wang describes a processor, memory accessible by the processor and instructions stored in the memory that when read by the processor direct the processor (see Figure 18 and refer for example to paragraphs [0288] and [0293]) to set a parameter of a first model and a parameter of a second model based on a pre-trained model (refer for example to paragraphs [0114] and [0531]); learn the second model by performing a predetermined task on a source domain (refer for example to paragraph [0078]); estimate an unobservable gradient for model updates on an unseen domain (refer for example to paragraph [0089]) based on the parameter of the first model and the parameter of the second model (refer for example to paragraphs [0061], [0065], [0114] and [0531]); and update the first model based on the estimated unobservable gradient (refer for example to paragraph [0531]).

With regard to claim 19, Wang describes a processor, memory accessible by the processor, and instructions stored in the memory that when read by the processor direct the processor (see Figure 18 and refer for example to paragraphs [0288] and [0293]) to retrieve a first model and a second model (refer for example to paragraphs [0114] and [0531]); learn the second model by classifying data of a first domain related to a first service (refer for example to paragraph [0078]); estimate an unobservable gradient (refer for example to paragraph [0089]) based on a parameter of the first model and a parameter of the second model (refer for example to paragraphs [0061], [0065], [0114] and [0531]); update the first model based on the estimated unobservable gradient (refer for example to paragraphs [0061], [0065], [0114] and [0531]); and classify data of a second domain related to a second service by using the updated first model (refer for example to paragraphs [0063] and [0086]).
As to claim 20, Wang describes retrieving a first model and a second model (refer for example to paragraphs [0114] and [0531]); learning the second model by classifying data of a first domain related to a first service (refer for example to paragraph [0078]); estimating an unobservable gradient (refer for example to paragraph [0089]) based on a parameter of the first model and a parameter of the second model (refer for example to paragraphs [0061], [0065], [0114] and [0531]); updating the first model based on the estimated unobservable gradient (refer for example to paragraph [0531]); and classifying data of a second domain related to a second service by using the updated first model (refer for example to paragraphs [0063] and [0086]).

Allowable Subject Matter

Claims 2-13, 15-16 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Relevant Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Xu, Sawada, Han, Amma, Tsai, Segu, Karlinsky, Arpit, Mangla, Sultana, Chiu, Ansari, Liu, Wu and Ran all disclose systems similar to applicant's claimed invention.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jose L. Couso whose telephone number is (571) 272-7388. The examiner can normally be reached on Monday through Friday from 5:30am to 1:30pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached on 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of an application may be obtained from the Patent Center information webpage on the USPTO website.
For more information about the Patent Center, see https://www.uspto.gov/patents/apply/patent-center. Should you have questions about access to the Patent Center, contact the Patent Electronic Business Center (EBC) at 571-272-4100 or via email at ebc@uspto.gov. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

/JOSE L COUSO/
Primary Examiner, Art Unit 2667
November 19, 2025
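The anticipated method, as mapped in the rejection, follows a recognizable meta-learning pattern for domain generalization: initialize two models from a pre-trained one, train the second on the source domain, derive a surrogate gradient from the two parameter sets, and update the first. A minimal sketch of that pattern follows; it is hypothetical throughout, and the regression task, loss, and parameter-difference surrogate are illustrative assumptions, not taken from the application or from Wang.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: pre-trained parameters initialize both models.
pretrained = rng.normal(size=4)
first = pretrained.copy()   # model to be generalized to the unseen domain
second = pretrained.copy()  # model trained on the source domain

# Assumed source-domain task: least-squares regression on synthetic data.
X = rng.normal(size=(32, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.normal(size=32)

# Step 2: learn the second model by performing the task on the source domain.
for _ in range(200):
    grad = 2 * X.T @ (X @ second - y) / len(y)
    second -= 0.05 * grad

# Step 3: estimate an "unobservable" gradient for the unseen domain from the
# two parameter sets; here the parameter difference serves as the surrogate
# (an assumption made for illustration only).
estimated_grad = first - second

# Step 4: update the first model based on the estimated gradient.
first -= 1.0 * estimated_grad
```

With a unit step size this toy update moves the first model exactly onto the source-trained parameters; a real implementation would presumably scale or regularize the surrogate gradient instead.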

Prosecution Timeline

Sep 13, 2023
Application Filed
Jan 12, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602738
NOISE REDUCTION CIRCUIT WITH DEMOSAIC PROCESSING
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597096
REAL-TIME FACIAL RESTORATION AND RELIGHTING IN VIDEOS USING FACIAL ENHANCEMENT NEURAL NETWORKS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586155
ADAPTIVE MODEL FOR SUPER-RESOLUTION
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579719
MEDICAL IMAGE PROCESSING APPARATUS AND MEDICAL IMAGE PROCESSING METHOD
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12579619
IMAGE EFFECT RENDERING
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 98% (+8.2%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 1185 resolved cases by this examiner. Grant probability derived from career allow rate.
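As a sanity check, the displayed figures are consistent with simple arithmetic on the examiner's career data (assuming, as the note above suggests, that grant probability is the career allow rate and that the with-interview figure simply adds the interview lift):

```python
granted, resolved = 1069, 1185                 # examiner's career totals
allow_rate = granted / resolved                # ~0.902 -> displayed as 90%
interview_lift = 0.082                         # +8.2% interview lift
with_interview = allow_rate + interview_lift   # ~0.984 -> displayed as 98%
print(f"{allow_rate:.0%} base, {with_interview:.0%} with interview")
```

Whether the tool literally adds the lift or fits something more elaborate is not stated; the additive model merely reproduces the rounded numbers shown.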
