Prosecution Insights
Last updated: April 19, 2026
Application No. 17/941,892

METHOD AND APPARATUS FOR LEARNING LOCALLY-ADAPTIVE LOCAL DEVICE TASK BASED ON CLOUD SIMULATION

Final Rejection — §103, §112, §Other
Filed
Sep 09, 2022
Examiner
KEATON, SHERROD L
Art Unit
2148
Tech Center
2100 — Computer Architecture & Software
Assignee
ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round
2 (Final)
Grant Probability: 52% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 6m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 52% (295 granted / 563 resolved; -2.6% vs TC avg)
Interview Lift: +36.1% (strong) among resolved cases with an interview vs. without
Typical Timeline: 4y 6m average prosecution; 32 applications currently pending
Career History: 595 total applications across all art units
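As a rough illustration of how these headline numbers fit together, the sketch below assumes the dashboard's interview lift is simply added on top of the career allow rate; that additive model is an assumption for illustration, not the tool's documented formula.

```python
# Illustrative sketch: deriving the interview-adjusted grant probability
# from this examiner's career statistics (additive-lift assumption).

granted, resolved = 295, 563
career_allow_rate = granted / resolved   # ~0.524, shown on the page as 52%
interview_lift = 0.361                   # +36.1% lift observed with an interview

with_interview = career_allow_rate + interview_lift
print(f"{career_allow_rate:.0%} baseline, {with_interview:.0%} with interview")
```

Under that assumption the numbers reconcile: 52% + 36.1% rounds to the 88% "With Interview" figure shown above.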

Statute-Specific Performance

§101: 14.9% (-25.1% vs TC avg)
§103: 62.0% (+22.0% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 563 resolved cases.

Office Action

§103, §112, §Other
DETAILED ACTION

This action is in response to the filing of 10-2-2025. Claims 1-17 are pending and have been considered below.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: server receiver, relearning unit and transmitter in claims 11-12 and 15-17.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

It appears the server receiver (210), relearning unit (260) and transmitter (270) are found in the local device (100) from Figure 10.

Claim Rejections - 35 USC § 112

Rejection of Claims 11 and 17 has been withdrawn.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nagarajan et al. (“Nagarajan” 10981272 B1) in view of “Domain Randomization for Sim2Real Transfer,” Lilian Weng (“Weng”), 5-5-2019, Pages 1-13.
Claim 1: Nagarajan discloses a method for learning a locally-adaptive local device task, the method comprising: receiving observation data about a surrounding environment recognized by a local device (Column 13, Line 65-Column 14, Line 25; observation of surrounding environment using sensor (local devices)); performing a domain randomization based on the observation data and a failure type of a task assigned to the local device and relearning a policy network of the assigned task based on the domain randomization (Column 5, Lines 35-65; simulation variations equivalent to domain randomization); and updating a policy network of the local device for the assigned task by transmitting the relearned policy network to the local device (Column 5, Lines 35-65; simulation variations equivalent to domain randomization performed and update sent out to devices).

Nagarajan further discloses wherein the local device is configured to obtain image information of a target object to be tasked in a local environment, classify a class of the object based on the image information, download a mesh model with a most similar shape to the object based on the class from the cloud server, and perform a local simulation using the mesh model for predicting possibility of success for the task (Column 9, Lines 52-57 (local area network operates at a local device); Column 14, Lines 37-48 (camera-captured image), Column 15, Lines 8-14 (categorize for classification), Column 15, Lines 49-55 (three-dimensional/mesh model), Column 17, Lines 15-25 (repeat process) and Column 19, Lines 15-24 (update successful attempt)).

Nagarajan discloses simulation variation, which is closely related to domain randomization; however, to explicitly disclose domain randomization, Weng is provided. Weng discloses a system where robotics are trained through model transfer (Page 1, introduction).
Further, Weng discloses methods for simulating robotic operation including domain randomization, where training happens in a source domain with randomization applied. The randomized parameters include position, color, texture or lighting (Pages 2-4, Uniform Domain Randomization). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve similar devices in the same way and explicitly utilize domain randomization for the simulations of Nagarajan. One would have been motivated to provide the functionality because this model can effectively adapt to real world environments (Page 2: Domain Randomization, Bullet 2).

Claim 2: Nagarajan and Weng disclose the method of claim 1, wherein the relearning performs the domain randomization by reflecting data about the failure type collected from at least one or more other local devices (Nagarajan: Column 5, Lines 35-65; simulation variations equivalent to domain randomization performed based on failed grasp).

Claim 3: Nagarajan and Weng disclose the method of claim 1, wherein the failure type of the assigned task comprises at least one of recognition failure, manipulation failure, or collision avoidance failure or a combination thereof (Nagarajan: Figure 1 and Column 11, Lines 37-63; manipulation and collision failure).

Claim 4: Nagarajan and Weng disclose the method of claim 3, wherein the relearning performs the domain randomization by using, in case of the recognition failure, at least one strategy among a change of a target object in color, texture, lighting and position, parameters of a camera sensor, and class mixture of the target object (Nagarajan: Column 3, Line 58-Column 4, Line 17 (alter parameters), Column 5, Lines 30-35 (parameters include position) and Column 18, Lines 43-57 (pixels decreased); Weng: Page 3, Uniform Domain Randomization (color/texture)).
Claim 5: Nagarajan and Weng disclose the method of claim 3, wherein the relearning performs the domain randomization by using, in case of the manipulation failure, at least one strategy among placement of a plurality of target objects with a same class, a change of an initial location and a position of the target object, a change in a physical property of a manipulator of the local device, and a change in a physical property of the target object (Nagarajan: Column 3, Line 58-Column 4, Line 17 and Column 5, Lines 30-35 (parameters include position); Weng: Page 3, Uniform Domain Randomization (position)).

Claim 6: Nagarajan and Weng disclose the method of claim 3, wherein the relearning performs the domain randomization by using, in case of the collision avoidance failure, at least one strategy among generation of random obstacles and then a change in color, texture, lighting and shape, a change in an initial location and a position of the random obstacles, a change in a size scale of the random obstacles, a change in an initial linear velocity and an angular velocity of the random obstacles, application of an external force to the random obstacles, and a change in a physical property of the random obstacles (Nagarajan: Column 3, Line 58-Column 4, Line 17 (change positions of objects); Weng: Page 3, Uniform Domain Randomization (color/texture)).

Claim 7: Nagarajan and Weng disclose the method of claim 1, wherein the receiving receives the observation data, a surrounding environment recognition result recognized by a local simulation of the local device, and the policy network of the assigned task (Nagarajan: Column 3, Line 58-Column 4, Line 17 (environment parameters captured) and Column 5, Lines 35-65 (environment and attempt provided to network for modification); Weng: Page 3, Figure 2 (images of environment)).
Claim 8: Nagarajan discloses a method for learning a locally-adaptive local device task, the method comprising: obtaining observation data about a surrounding environment (Column 13, Line 65-Column 14, Line 25; observation of surrounding environment); configuring a local simulation environment by using the observation data; predicting possibility of success for an assigned task by using the local simulation environment (Column 5, Lines 35-65 (simulation variations equivalent to domain randomization) and Column 19, Lines 25-37 (simulation of grasp within environment)); requesting, to a cloud server, relearning of a policy network of the assigned task, when the assigned task is determined to be a failure; and updating the policy network of the assigned task by receiving a relearned policy network from the cloud server (Column 5, Lines 35-65 (simulation variations equivalent to domain randomization provided from server to retrain), Column 9, Lines 42-50 (server is a cloud system) and Column 19, Lines 25-37 (refine/relearn)).

Nagarajan further discloses wherein the local device is configured to obtain image information of a target object to be tasked in a local environment, classify a class of the object based on the image information, download a mesh model with a most similar shape to the object based on the class from the cloud server, and perform a local simulation using the mesh model for predicting possibility of success for the task (Column 9, Lines 52-57 (local area network operates at a local device); Column 14, Lines 37-48 (camera-captured image), Column 15, Lines 8-14 (categorize for classification), Column 15, Lines 49-55 (three-dimensional/mesh model), Column 17, Lines 15-25 (repeat process) and Column 19, Lines 15-24 (update successful attempt)).

Nagarajan discloses simulation variation, which is closely related to domain randomization; however, to explicitly disclose domain randomization, Weng is provided.
Weng discloses a system where robotics are trained through model transfer (Page 1, introduction). Further, Weng discloses methods for simulating robotic operation including domain randomization, where training happens in a source domain with randomization applied. The randomized parameters include position, color, texture or lighting (Pages 2-4, Uniform Domain Randomization). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve similar devices in the same way and explicitly utilize domain randomization for the simulations of Nagarajan. One would have been motivated to provide the functionality because this model can effectively adapt to real world environments (Page 2: Domain Randomization, Bullet 2).

Claim 9: Nagarajan and Weng disclose the method of claim 8, wherein the requesting of the learning requests relearning of the policy network of the assigned task by providing, to the cloud server, the observation data, the local simulation environment, and the policy network of the assigned task (Nagarajan: Column 3, Line 58-Column 4, Line 17 (environment parameters captured), Column 5, Lines 35-65 (environment and attempt provided to network for modification) and Column 9, Lines 42-50 (server is a cloud system)).

Claim 10: Nagarajan and Weng disclose the method of claim 8, wherein the predicting of the possibility of success predicts possibility of success for at least one of recognition of a target object for the assigned task, manipulation of the target object, or collision avoidance with an obstacle or a combination thereof (Nagarajan: Figure 1 and Column 11, Lines 37-63; manipulation and collision failure).

Claim 11 is similar in scope to claim 1 and therefore rejected under the same rationale. Regarding the apparatus, see Nagarajan: Figure 1, abstract, and Column 22, Lines 9-37; regarding the server, see Nagarajan: Column 1, Line 44-Column 2, Line 5.
Claim 12 is similar in scope to claim 2 and therefore rejected under the same rationale.
Claim 13 is similar in scope to claim 3 and therefore rejected under the same rationale.
Claim 14 is similar in scope to claim 4 and therefore rejected under the same rationale.
Claim 15 is similar in scope to claim 5 and therefore rejected under the same rationale.
Claim 16 is similar in scope to claim 6 and therefore rejected under the same rationale.
Claim 17 is similar in scope to claim 7 and therefore rejected under the same rationale.

Response to Arguments

Applicant's arguments have been fully considered, and newly cited areas of Nagarajan are identified. Nagarajan discloses in Column 9, Lines 52-57 a local area network which operates at a local device; Column 14, Lines 37-48 provides a camera-captured image; Column 15, Lines 8-14 categorizes for classification; Column 15, Lines 49-55 provides a three-dimensional/mesh model; Column 17, Lines 15-25 repeats the process; and Column 19, Lines 15-24 will update successful attempts. Regarding the § 112, sixth paragraph interpretation, the current language still invokes the analysis, with the “configured to” language.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure: 20220318459 A1 [0178].

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

In the interests of compact prosecution, Applicant is invited to contact the examiner via electronic media pursuant to USPTO policy outlined in MPEP § 502.03. All electronic communication must be authorized in writing. Applicant may wish to file an Internet Communications Authorization Form PTO/SB/439. Applicant may wish to request an interview using the Interview Practice website: http://www.uspto.gov/patent/laws-and-regulations/interview-practice. Applicant is reminded Internet e-mail may not be used for communication for matters under 35 U.S.C. § 132 or which otherwise require a signature. A reply to an Office action may NOT be communicated by Applicant to the USPTO via Internet e-mail.
If such a reply is submitted by Applicant via Internet e-mail, a paper copy will be placed in the appropriate patent application file with an indication that the reply is NOT ENTERED. See MPEP § 502.03(II).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHERROD KEATON, whose telephone number is 571-270-1697. The examiner can normally be reached 9:30am to 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MICHELLE BECHTOLD, can be reached at 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHERROD L KEATON/
Primary Examiner, Art Unit 2148
1-19-2026
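For readers unfamiliar with the technique at the center of the §103 rejection, uniform domain randomization (as described in the cited Weng reference) can be sketched as follows. The parameter names and ranges here are illustrative assumptions, not values drawn from Nagarajan or Weng.

```python
import random

# Illustrative sketch of uniform domain randomization, the sim-to-real
# technique discussed in the cited Weng reference: each simulated training
# episode samples visual/physical parameters uniformly from fixed ranges so
# the learned policy does not overfit one rendering of the environment.
# Parameter names and ranges below are assumptions for illustration only.
RANGES = {
    "light_intensity": (0.2, 2.0),    # lighting variation
    "object_hue":      (0.0, 1.0),    # color/texture variation
    "object_x_m":      (-0.3, 0.3),   # target object position
    "camera_fov_deg":  (45.0, 75.0),  # camera sensor parameters
}

def sample_randomized_env(rng=random):
    """Draw one randomized simulation configuration."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}

# A fresh configuration would be drawn for every training episode:
env = sample_randomized_env()
```

The claimed method ties the choice of randomized parameters to the observed failure type (e.g., lighting/color for recognition failures, object pose for manipulation failures), which is the distinction the applicant argues over the cited art.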

Prosecution Timeline

Sep 09, 2022
Application Filed
Jun 28, 2025
Non-Final Rejection — §103, §112, §Other
Oct 02, 2025
Response Filed
Jan 23, 2026
Final Rejection — §103, §112, §Other (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566823: SYSTEMS AND METHODS FOR INTERPOLATIVE CENTROID CONTRASTIVE LEARNING (granted Mar 03, 2026; 2y 5m to grant)
Patent 12547820: Automated Generation Of Commentator-Specific Scripts (granted Feb 10, 2026; 2y 5m to grant)
Patent 12530587: SYSTEMS AND METHODS FOR CONTRASTIVE LEARNING WITH SELF-LABELING REFINEMENT (granted Jan 20, 2026; 2y 5m to grant)
Patent 12524147: Modality Learning on Mobile Devices (granted Jan 13, 2026; 2y 5m to grant)
Patent 12524603: METHODS FOR RECOGNIZING AND INTERPRETING GRAPHIC ELEMENTS (granted Jan 13, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 52%
With Interview: 88% (+36.1%)
Median Time to Grant: 4y 6m
PTA Risk: Moderate
Based on 563 resolved cases by this examiner. Grant probability derived from career allow rate.
